Network Working Group                                            L. Yong
Internet Draft                                                 L. Dunbar
Category: Informational                                           Huawei
                                                                  M. Toy
                                                                 Verizon
                                                                A. Isaac
                                                        Juniper Networks
                                                                      V.
                                                                  Manral
                                                          Ionos Networks

Expires: July 2017                                     February 10, 2017

    Use Cases for Data Center Network Virtualization Overlay Networks

                        draft-ietf-nvo3-use-case-16

Abstract

   This document describes data center network virtualization overlay
   (NVO3) network use cases that can be deployed in various data
   centers and serve different data center applications.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on July 21, 2017.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
      1.1. Terminology
      1.2. NVO3 Background
   2. DC with Large Number of Virtual Networks
   3. DC NVO3 Virtual Network and External Network Interconnection
      3.1. DC NVO3 Virtual Network Access via the Internet
      3.2. DC NVO3 Virtual Network and SP WAN VPN Interconnection
   4. DC Applications Using NVO3
      4.1. Supporting Multiple Technologies
      4.2. DC Applications Spanning Multiple Physical Zones
      4.3. Virtual Data Center (vDC)
   5. Summary
   6. Security Considerations
   7. IANA Considerations
   8. Informative References
   Contributors
   Acknowledgements
   Authors' Addresses

1. Introduction

   Server virtualization has changed the Information Technology (IT)
   industry in terms of the efficiency, cost, and speed of providing
   new applications and/or services such as cloud applications.
   However, traditional data center (DC) networks have limits in
   supporting cloud applications and multi-tenant networks [RFC7364].
   The goals of data center network virtualization overlay (NVO3)
   networks are to decouple the communication among tenant systems
   from DC physical infrastructure networks and to allow one physical
   network infrastructure to:

   o  Carry many NVO3 virtual networks and isolate the traffic of
      different NVO3 virtual networks on a physical network.
   o  Provide independent address space, such as MAC and IP, in each
      individual NVO3 virtual network.

   o  Support flexible Virtual Machine (VM) and/or workload placement,
      including the ability to move them from one server to another
      without requiring VM address changes and physical infrastructure
      network configuration changes, and the ability to perform a "hot
      move" with no disruption to the live application running on
      those VMs.

   These characteristics of NVO3 virtual networks help address the
   issues that cloud applications face in data centers [RFC7364].

   Hosts in one NVO3 virtual network may communicate with hosts in
   another NVO3 virtual network that is carried by the same physical
   network, or a different physical network, via a gateway. Examples
   of the latter use case are: 1) a DC migrating toward an NVO3
   solution proceeds in steps, where a portion of the tenant systems
   in a VN are on virtualized servers while others exist on a LAN;
   2) many DC applications serve Internet users who are on different
   physical networks; 3) some applications are CPU bound, such as Big
   Data analytics, and may not run on virtualized resources. The
   inter-VN policies are usually enforced by the gateway.

   This document describes general NVO3 virtual network use cases that
   apply to various data centers. The use cases described here
   represent DC providers' interests and vision for their cloud
   services. The document groups the use cases into three categories
   from simple to sophisticated in terms of implementation. However,
   the implementation details of these use cases are outside the scope
   of this document. These three categories are highlighted below:

   o  Basic NVO3 virtual networks (Section 2). All Tenant Systems (TS)
      in the network are located within the same DC. The individual
      networks can be either Layer 2 (L2) or Layer 3 (L3). The number
      of NVO3 virtual networks in a DC is much larger than the number
      that traditional VLAN-based virtual networks [IEEE802.1Q] can
      support.

   o  A virtual network that spans multiple data centers and/or
      extends to customer premises, where NVO3 virtual networks are
      constructed and interconnected with other virtual or physical
      networks outside the data center. An enterprise customer may use
      a traditional carrier-grade VPN or an IPsec tunnel over the
      Internet to communicate with its systems in the DC. This is
      described in Section 3.

   o  DC applications or services require an advanced network that
      contains several NVO3 virtual networks that are interconnected
      by gateways. Three scenarios are described in Section 4:
      (1) supporting multiple technologies; (2) constructing several
      virtual networks as a tenant network; (3) applying NVO3 to a
      virtual Data Center (vDC).

   The document uses the architecture reference model defined in
   [RFC7365] to describe the use cases.

1.1. Terminology

   This document uses the terminology defined in [RFC7365] and
   [RFC4364]. Some additional terms used in the document are listed
   here.

   ASBR: Autonomous System Border Router

   DMZ: Demilitarized Zone. A computer or small sub-network that sits
   between a more trusted internal network, such as a corporate
   private LAN, and an untrusted or less trusted external network,
   such as the public Internet.

   DNS: Domain Name Service [RFC1035]

   DC Operator: An entity that is responsible for constructing and
   managing all resources in data centers, including, but not limited
   to, compute, storage, networking, etc.

   DC Provider: An entity that uses its DC infrastructure to offer
   services to its customers.

   NAT: Network Address Translation [RFC3022]

   vGW: virtual Gateway; a gateway component used for an NVO3 virtual
   network to interconnect with another virtual/physical network.
   NVO3 virtual network: A virtual network that is implemented based
   on the NVO3 architecture [RFC8014].

   PE: Provider Edge

   SP: Service Provider

   TS: Tenant System. A TS can be a physical server/device or a
   virtual machine (VM) on a server, i.e., an end-device [RFC7365].

   VRF-LITE: Virtual Routing and Forwarding - LITE [VRF-LITE]

   VN: NVO3 virtual network.

   WAN VPN: Wide Area Network Virtual Private Network [RFC4364]
   [RFC7432]

1.2. NVO3 Background

   An NVO3 virtual network is a virtual network in a DC that is
   implemented based on the NVO3 architecture [RFC8014]. This
   architecture is often referred to as an overlay architecture. The
   traffic carried by an NVO3 virtual network is encapsulated at a
   Network Virtualization Edge (NVE) [RFC8014] and carried by a tunnel
   to another NVE, where the traffic is decapsulated and sent to a
   destination Tenant System (TS). The NVO3 architecture decouples
   NVO3 virtual networks from the DC physical network configuration.
   The architecture uses common tunnels to carry NVO3 traffic that
   belongs to multiple NVO3 virtual networks.

   An NVO3 virtual network may be an L2 or L3 domain. The network
   provides switching (L2) or routing (L3) capability to support host
   (i.e., tenant system) communications. An NVO3 virtual network may
   be required to carry unicast traffic and/or multicast or
   broadcast/unknown-unicast (for L2 only) traffic from/to tenant
   systems. There are several ways to transport NVO3 virtual network
   BUM (Broadcast, Unknown-unicast, Multicast) traffic [NVO3MCAST].

   An NVO3 virtual network provides communications among Tenant
   Systems (TSs) in a DC. A TS can be a physical server/device or a
   virtual machine (VM) on a server end-device [RFC7365].

2. DC with Large Number of Virtual Networks

   A DC provider often uses NVO3 virtual networks for internal
   applications where each application runs on many VMs or physical
   servers and the provider requires applications to be segregated
   from each other. A DC may run a large number of NVO3 virtual
   networks to support many applications concurrently, whereas a
   traditional VLAN solution [IEEE802.1Q] is limited to 4094 VLANs.

   Applications running on VMs may require different quantities of
   computing resources, which may result in a computing resource
   shortage on some servers while other servers are nearly idle. A
   shortage of computing resources may impact application performance.
   DC operators desire VM or workload movement for resource usage
   optimization. VM dynamic placement and mobility result in frequent
   changes of the binding between a TS and an NVE. The TS reachability
   update mechanisms should take significantly less time than the
   typical TCP/SCTP retransmission timeout window, so that endpoints'
   TCP/SCTP connections won't be impacted by a TS becoming bound to a
   different NVE. The capability of supporting many TSs in a virtual
   network and many virtual networks in a DC is critical for an NVO3
   solution.

   When NVO3 virtual networks segregate VMs belonging to different
   applications, DC operators can independently assign MAC and/or IP
   address space to each virtual network. This addressing is more
   flexible than requiring all hosts in all NVO3 virtual networks to
   share one address space. In contrast, typical use of IEEE 802.1Q
   VLANs requires a single common MAC address space.

3. DC NVO3 Virtual Network and External Network Interconnection

   Many customers (enterprises or individuals) who utilize a DC
   provider's compute and storage resources to run their applications
   need to access their systems hosted in a DC through the Internet or
   Service Providers' Wide Area Networks (WANs). A DC provider can
   construct an NVO3 virtual network that provides connectivity to all
   the resources designated for a customer and allows the customer to
   access the resources via a virtual gateway (vGW). WAN connectivity
   to the virtual gateway can be provided by VPN technologies such as
   IPsec VPNs [RFC4301] and BGP/MPLS IP VPNs [RFC4364].

   If a virtual network spans multiple DC sites, one design using NVO3
   is to allow the network to seamlessly span the sites without
   termination at DC gateway routers. In this case, the tunnel between
   a pair of NVEs can be carried within other intermediate tunnels
   over the Internet or other WANs, or an intra-DC tunnel and inter-DC
   tunnel(s) can be stitched together to form an end-to-end tunnel
   between the pair of NVEs that are in different DC sites. Both cases
   will form one NVO3 virtual network across multiple DC sites.

   Two use cases are described in the following sections.

3.1. DC NVO3 Virtual Network Access via the Internet

   A customer can connect to an NVO3 virtual network via the Internet
   in a secure way. Figure 1 illustrates an example of this case. The
   NVO3 virtual network has an instance at NVE1 and NVE2, and the two
   NVEs are connected via an IP tunnel in the data center. A set of
   tenant systems are attached to NVE1 on a server. NVE2 resides on a
   DC Gateway device. NVE2 terminates the tunnel and uses the VNID on
   the packet to pass the packet to the corresponding vGW entity on
   the DC GW (the vGW is the default gateway for the virtual network).
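   The VNID-based dispatch step described above can be illustrated
   with a small sketch (a hypothetical Python model; the class and
   method names are illustrative only and are not taken from any NVO3
   specification):

```python
# Hypothetical model of the dispatch step at a gateway NVE: after
# tunnel decapsulation, the VNID carried on the packet selects the
# per-tenant vGW that handles the inner frame. Names are illustrative.

class VirtualGateway:
    """Default gateway instance for one NVO3 virtual network."""

    def __init__(self, vnid: int):
        self.vnid = vnid
        self.delivered: list[bytes] = []

    def forward(self, inner_frame: bytes) -> None:
        # A real vGW would consult routing state and apply NAT or
        # firewall policy; here we simply record the frame.
        self.delivered.append(inner_frame)


class GatewayNVE:
    """Terminates NVO3 tunnels and dispatches inner frames by VNID."""

    def __init__(self):
        self.vgw_by_vnid: dict[int, VirtualGateway] = {}

    def add_vgw(self, vgw: VirtualGateway) -> None:
        self.vgw_by_vnid[vgw.vnid] = vgw

    def receive(self, vnid: int, inner_frame: bytes) -> None:
        vgw = self.vgw_by_vnid.get(vnid)
        if vgw is None:
            return  # unknown virtual network: drop the frame
        vgw.forward(inner_frame)
```

   In this model, onboarding a customer amounts to instantiating one
   more VirtualGateway and registering it under its VNID, which
   mirrors the point that DC operators construct a vGW per customer.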
   A customer can access their systems, i.e., TS1 or TSn, in the DC
   via the Internet by using an IPsec tunnel [RFC4301]. The IPsec
   tunnel is configured between the vGW and the customer gateway at
   the customer site. Either a static route or internal Border Gateway
   Protocol (iBGP) may be used for prefix advertisement. The vGW
   provides IPsec functionality such as authentication and encryption;
   iBGP traffic is carried within the IPsec tunnel. Some vGW features
   are listed below:

   o  The vGW maintains the TS/NVE mappings and advertises the TS
      prefix to the customer via a static route or iBGP.

   o  Some vGW functions, such as firewall and load balancer, can be
      performed by locally attached network appliance devices.

   o  If the NVO3 virtual network uses a different address space than
      external users, then the vGW needs to provide the NAT function.

   o  More than one IPsec tunnel can be configured for redundancy.

   o  The vGW can be implemented on a server or VM. In this case, IP
      tunnels or IPsec tunnels can be used over the DC infrastructure.

   o  DC operators need to construct a vGW for each customer.

     Server+---------------+
           |   TS1  TSn    |
           |    |...|      |
           |  +-+---+-+    |          Customer Site
           |  | NVE1  |    |             +-----+
           |  +---+---+    |             | GW  |
           +------+--------+             +--+--+
                  |                         *
              L3 Tunnel                     *
                  |                         *
     DC GW +------+---------+            .--.  .--.
           |  +---+---+     |           (    '*   '.--.
           |  | NVE2  |     |        .-.'       *      )
           |  +---+---+     |       (           * Internet )
           |  +---+---+.    |        (        *          /
           |  |  vGW  | * * * * * * * *  '-'        '-'
           |  +-------+ |   |  IPsec   \../  \.--/'
           |  +--------+    |  Tunnel
           +----------------+

             DC Provider Site

      Figure 1 - DC Virtual Network Access via the Internet

3.2. DC NVO3 Virtual Network and SP WAN VPN Interconnection

   In this case, an enterprise customer wants to use a Service
   Provider (SP) WAN VPN [RFC4364] [RFC7432] to interconnect its sites
   with an NVO3 virtual network in a DC site. The Service Provider
   constructs a VPN for the enterprise customer. Each enterprise site
   peers with an SP PE. The DC Provider and VPN Service Provider can
   build an NVO3 virtual network and a WAN VPN independently, and then
   interconnect them via a local link, or a tunnel, between the DC GW
   and WAN Provider Edge (PE) devices. The control plane
   interconnection options between the DC and WAN are described in
   [RFC4364]. Using option A as specified in [RFC4364] with VRF-LITE
   [VRF-LITE], both Autonomous System Border Routers (ASBRs), i.e.,
   the DC GW and SP PE, maintain a routing/forwarding table (VRF).
   Using option B as specified in [RFC4364], the DC ASBR and SP ASBR
   do not maintain the VRF table; they only maintain the NVO3 virtual
   network and VPN identifier mappings, i.e., label mapping, and swap
   the labels on the packets in the forwarding process. Both options A
   and B allow the NVO3 virtual network and the VPN to use their own
   identifiers, and the two identifiers are mapped at the DC GW. With
   option C in [RFC4364], the VN and VPN use the same identifier, and
   both ASBRs perform tunnel stitching, i.e., tunnel segment mapping.
   Each option has its pros and cons [RFC4364] and has been deployed
   in SP networks depending on the application requirements. BGP is
   used in these options for route distribution between DCs and SP
   WANs. Note that if the DC is the SP's data center, the DC GW and SP
   PE in this case can be merged into one device that performs the
   interworking of the VN and VPN within an AS.

   These solutions allow the enterprise networks to communicate with
   the tenant systems attached to the NVO3 virtual network in the DC
   without interfering with the DC provider's underlying physical
   networks and other NVO3 virtual networks in the DC. The enterprise
   can use its own address space in the NVO3 virtual network. The DC
   provider can manage which VM and storage elements attach to the
   NVO3 virtual network. The enterprise customer manages which
   applications run on the VMs without knowing the location of the VMs
   in the DC (see Section 4 for more details).

   Furthermore, in this use case, the DC operator can move the VMs
   assigned to the enterprise from one server to another in the DC
   without the enterprise customer being aware, i.e., with no impact
   on the enterprise's 'live' applications. Such advanced technologies
   bring DC providers great benefits in offering cloud services, but
   add some requirements for NVO3 [RFC7364] as well.

4. DC Applications Using NVO3

   NVO3 technology provides DC operators with flexibility in designing
   and deploying different applications in an end-to-end
   virtualization overlay environment. The operators no longer need to
   worry about the constraints of the DC physical network
   configuration when creating VMs and configuring a network to
   connect them. A DC provider may use NVO3 in various ways, in
   conjunction with other physical networks and/or virtual networks in
   the DC. This section highlights some use cases for this goal.

4.1. Supporting Multiple Technologies

   Servers deployed in a large data center are often installed at
   different times, and may have different capabilities/features. Some
   servers may be virtualized, while others may not; some may be
   equipped with virtual switches, while others may not. For the
   servers equipped with hypervisor-based virtual switches, some may
   support a standardized NVO3 encapsulation, some may not support any
   encapsulation, and some may support a documented encapsulation
   protocol (e.g., VXLAN [RFC7348] or NVGRE [RFC7637]) or proprietary
   encapsulations.
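   As a rough illustration of what such encapsulation handling
   involves, the sketch below parses a VXLAN header and re-
   encapsulates the inner frame with an NVGRE-style GRE header. It is
   a simplified, hypothetical example: it covers only the VXLAN and
   NVGRE headers themselves, ignores the outer IP/UDP headers that a
   real gateway must also rewrite, and assumes a one-to-one VNI-to-
   VSID mapping table.

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" bit in the VXLAN header (RFC 7348)
GRE_KEY_PRESENT = 0x2000     # "K" bit in the GRE flags (NVGRE, RFC 7637)
ETH_P_TEB = 0x6558           # Transparent Ethernet Bridging

def vxlan_decap(payload: bytes) -> tuple[int, bytes]:
    """Strip the 8-byte VXLAN header; return (VNI, inner Ethernet frame)."""
    if not payload[0] & VXLAN_FLAG_VNI_VALID:
        raise ValueError("VXLAN header without a valid VNI")
    vni = int.from_bytes(payload[4:7], "big")  # 24-bit VNI
    return vni, payload[8:]

def nvgre_encap(vsid: int, flow_id: int, inner: bytes) -> bytes:
    """Prepend an NVGRE header: GRE with a 24-bit VSID + 8-bit FlowID key."""
    key = (vsid << 8) | flow_id
    return struct.pack("!HHI", GRE_KEY_PRESENT, ETH_P_TEB, key) + inner

def translate(vxlan_payload: bytes, vni_to_vsid: dict[int, int]) -> bytes:
    """One gateway translation step: VXLAN in, NVGRE out (mapping assumed)."""
    vni, inner = vxlan_decap(vxlan_payload)
    return nvgre_encap(vni_to_vsid[vni], flow_id=0, inner=inner)
```

   The inner Ethernet frame passes through unchanged; only the overlay
   header carrying the virtual network identifier is rewritten, which
   is the essence of the gateway's translation role described here.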
   To construct a tenant network among these servers and the ToR
   switches, operators can construct one traditional VLAN network and
   two virtual networks, where one uses VXLAN encapsulation and the
   other uses NVGRE, and interconnect these three networks via a
   gateway or virtual GW. The GW performs packet
   encapsulation/decapsulation translation between the networks.

   In another case, some of a tenant's software has high CPU and
   memory consumption and only makes sense to run on standalone
   servers, while other software of the tenant may be suited to run on
   VMs. However, the provider's DC infrastructure is configured to use
   NVO3 to connect VMs and VLANs [IEEE802.1Q] to connect physical
   servers. The tenant network requires interworking between NVO3 and
   traditional VLAN.

4.2. DC Applications Spanning Multiple Physical Zones

   A DC can be partitioned into multiple physical zones, with each
   zone having different access permissions and running different
   applications. For example, a three-tier zone design has a front
   zone (Web tier) with Web applications, a mid zone (application
   tier) where service applications such as credit payment or ticket
   booking run, and a back zone (database tier) with data. External
   users are only able to communicate with the Web applications in the
   front zone; the back zone can only receive traffic from the
   application zone. In this case, communications between the zones
   must pass through one or more security functions in a physical DMZ.
   Each zone can be implemented by one NVO3 virtual network, and the
   security functions in the DMZ can be applied between two NVO3
   virtual networks, i.e., two zones. If network functions (NFs),
   especially the security functions in the physical DMZ, cannot
   process encapsulated NVO3 traffic, the NVO3 tunnels have to be
   terminated for the NF to perform its processing on the application
   traffic.

4.3. Virtual Data Center (vDC)

   An enterprise data center today may deploy routers, switches, and
   network appliance devices to construct its internal network, DMZ,
   and external network access; it may have many servers and storage
   running various applications. With NVO3 technology, a DC provider
   can construct a virtual Data Center (vDC) over its physical DC
   infrastructure and offer a virtual Data Center service to
   enterprise customers. A vDC at the DC provider site provides the
   same capability as the physical DC at a customer site. A customer
   manages its own applications running in its vDC. A DC provider can
   further offer different network service functions to the customer.
   The network service functions may include firewall, DNS, load
   balancer, gateway, etc.

   Figure 2 below illustrates one such scenario at the service
   abstraction level. In this example, the vDC contains several L2 VNs
   (L2VNx, L2VNy, L2VNz) to group the tenant systems together on a
   per-application basis, and one L3 VN (L3VNa) for the internal
   routing. A network firewall and gateway runs on a VM or server that
   connects to L3VNa and is used for inbound and outbound traffic
   processing. A load balancer (LB) is used in L2VNx. A VPN is also
   built between the gateway and the enterprise router. An enterprise
   customer runs Web/Mail/Voice applications on VMs within the vDC.
   The users at the enterprise site access the applications running in
   the vDC via the VPN; Internet users access these applications via
   the gateway/firewall at the provider DC site.

             Internet                          ^ Internet
                |                              |
                ^                           +--+---+
                |                           |  GW  |
                |                           +--+---+
                |                              |
        +-------+--------+                  +--+---+
        |Firewall/Gateway+--- VPN-----------+router|
        +-------+--------+                  +-+--+-+
                |                             |  |
             ...+....                         |..|
      +-----: L3 VNa :---------+              LANs
    +-+-+    ........          |
    |LB |       |              |          Enterprise Site
    +-+-+       |              |
     ...+...    ...+...     ...+...
    : L2VNx :  : L2VNy :   : L2VNz :
     .......    .......     .......
      |..|       |..|         |..|
      |  |       |  |         |  |
    Web App.   Mail App.   VoIP App.

             Provider DC Site

      Figure 2 - Virtual Data Center Abstraction View

   The enterprise customer decides which applications should be
   accessible only via the intranet and which should be accessible via
   both the intranet and the Internet, and configures the proper
   security policy and gateway function at the firewall/gateway.
   Furthermore, an enterprise customer may want multiple zones in a
   vDC (see Section 4.2) for security and/or the ability to set
   different QoS levels for the different applications.

   The vDC use case requires an NVO3 solution to provide DC operators
   with an easy and quick way to create an NVO3 virtual network and
   NVEs for any vDC design, to allocate TSs and assign them to the
   corresponding NVO3 virtual network, and to illustrate the vDC
   topology and manage/configure individual elements in the vDC in a
   secure way.

5. Summary

   This document describes some general NVO3 use cases in DCs. The
   combination of these cases will give operators the flexibility and
   capability to design more sophisticated support for various cloud
   applications.

   DC services may vary. NVO3 virtual networks make it possible to
   support a large number of virtual networks in a DC and to ensure
   that the network infrastructure is not impacted by the number of
   VMs or by dynamic workload changes in the DC.

   NVO3 uses tunnel techniques to deliver NVO3 traffic over the DC
   physical infrastructure network. A tunnel encapsulation protocol is
   necessary. An NVO3 tunnel may in turn be tunneled over other
   intermediate tunnels over the Internet or other WANs.

   An NVO3 virtual network in a DC may be accessed by external users
   in a secure way. Many existing technologies can help achieve this.

6. Security Considerations

   Security is a concern.
   DC operators need to provide a tenant with a secured virtual
   network, which means one tenant's traffic is isolated from other
   tenants' traffic and is not leaked to the underlay networks.
   Tenants are vulnerable to observation and data
   modification/injection by the operator of the underlay and should
   only use operators they trust. DC operators also need to prevent a
   tenant application from attacking their underlay DC network;
   further, they need to prevent a tenant application from attacking
   another tenant application via the DC infrastructure network. For
   example, a tenant application may attempt to generate a large
   volume of traffic to overload the DC's underlying network. This can
   be prevented by limiting the bandwidth of such communications.

7. IANA Considerations

   This document does not request any action from IANA.

8. Informative References

   [IEEE802.1Q]  IEEE, "IEEE Standard for Local and metropolitan area
                 networks -- Media Access Control (MAC) Bridges and
                 Virtual Bridged Local Area Networks", IEEE Std
                 802.1Q, 2011.

   [NVO3MCAST]   Ghanwani, A., Dunbar, L., et al., "A Framework for
                 Multicast in Network Virtualization Overlays", RFC
                 8293 (previously draft-ietf-nvo3-mcast-framework).

   [RFC1035]     Mockapetris, P., "Domain Names - Implementation and
                 Specification", RFC 1035, November 1987.

   [RFC3022]     Srisuresh, P. and K. Egevang, "Traditional IP Network
                 Address Translator (Traditional NAT)", RFC 3022,
                 January 2001.

   [RFC4301]     Kent, S., "Security Architecture for the Internet
                 Protocol", RFC 4301, December 2005.

   [RFC4364]     Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual
                 Private Networks (VPNs)", RFC 4364, February 2006.

   [RFC7348]     Mahalingam, M., Dutt, D., et al., "Virtual eXtensible
                 Local Area Network (VXLAN): A Framework for
                 Overlaying Virtualized Layer 2 Networks over Layer 3
                 Networks", RFC 7348, August 2014.

   [RFC7364]     Narten, T., et al., "Problem Statement: Overlays for
                 Network Virtualization", RFC 7364, October 2014.

   [RFC7365]     Lasserre, M., Morin, T., et al., "Framework for Data
                 Center (DC) Network Virtualization", RFC 7365,
                 October 2014.

   [RFC7432]     Sajassi, A., Ed., Aggarwal, R., Bitar, N., Isaac, A.,
                 and J. Uttaro, "BGP MPLS-Based Ethernet VPN",
                 RFC 7432, February 2015.

   [RFC7637]     Garg, P. and Y. Wang, "NVGRE: Network Virtualization
                 Using Generic Routing Encapsulation", RFC 7637,
                 September 2015.

   [RFC8014]     Black, D., et al., "An Architecture for Data-Center
                 Network Virtualization over Layer 3 (NVO3)",
                 RFC 8014, January 2017.

   [VRF-LITE]    Cisco, "Configuring VRF-lite", http://www.cisco.com

Contributors

   David Black
   Dell EMC
   176 South Street
   Hopkinton, MA 01748
   Email: David.Black@dell.com

   Vinay Bannai
   PayPal
   2211 N. First St,
   San Jose, CA 95131
   Phone: +1-408-967-7784
   Email: vbannai@paypal.com

   Ram Krishnan
   Brocade Communications
   San Jose, CA 95134
   Phone: +1-408-406-7890
   Email: ramk@brocade.com

   Kieran Milne
   Juniper Networks
   1133 Innovation Way
   Sunnyvale, CA 94089
   Phone: +1-408-745-2000
   Email: kmilne@juniper.net

Acknowledgements

   The authors would like to thank Sue Hares, Young Lee, David Black,
   Pedro Marques, Mike McBride, David McDysan, Randy Bush, Uma
   Chunduri, Eric Gray, David Allan, Joe Touch, Olufemi Komolafe,
   Matthew Bocci, and Alia Atlas for their reviews, comments, and
   suggestions.

Authors' Addresses

   Lucy Yong
   Huawei Technologies

   Phone: +1-918-808-1918
   Email: lucy.yong@huawei.com

   Linda Dunbar
   Huawei Technologies
   5340 Legacy Dr.
   Plano, TX 75025 US

   Phone: +1-469-277-5840
   Email: linda.dunbar@huawei.com

   Mehmet Toy
   Verizon

   E-mail: mtoy054@yahoo.com

   Aldrin Isaac
   Juniper Networks
   E-mail: aldrin.isaac@gmail.com

   Vishwas Manral

   Email: vishwas@ionosnetworks.com