Network Working Group                                           L. Yong
Internet Draft                                                   Huawei
Category: Informational                                          M. Toy
                                                                Comcast
                                                               A. Isaac
                                                              Bloomberg
                                                              V. Manral
                                                         Ionos Networks
                                                              L. Dunbar
                                                                 Huawei

Expires: December 2016                                     June 3, 2016

        Use Cases for Data Center Network Virtualization Overlays

                       draft-ietf-nvo3-use-case-08

Abstract

   This document describes Data Center (DC) Network Virtualization
   over Layer 3 (NVO3) use cases that can be deployed in various data
   centers and serve different applications.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on December 3, 2016.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include
   Simplified BSD License text as described in Section 4.e of the
   Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1. Introduction
      1.1. Terminology
   2. Basic Virtual Networks in a Data Center
   3. DC Virtual Network and External Network Interconnection
      3.1. DC Virtual Network Access via the Internet
      3.2. DC VN and SP WAN VPN Interconnection
   4. DC Applications Using NVO3
      4.1. Supporting Multiple Technologies and Applications
      4.2. Tenant Network with Multiple Subnets
      4.3. Virtualized Data Center (vDC)
   5. Summary
   6. Security Considerations
   7. IANA Considerations
   8. References
      8.1. Normative References
      8.2. Informative References
   Contributors
   Acknowledgements
   Authors' Addresses

1. Introduction

   Server virtualization has changed the Information Technology (IT)
   industry in terms of the efficiency, cost, and speed of providing
   new applications and/or services such as cloud applications.
   However, traditional Data Center (DC) networks have some
   limitations in supporting cloud applications and multi-tenant
   networks [RFC7364].  The goal of Network Virtualization Overlays
   in the DC is to decouple the communication among tenant systems
   from the DC physical infrastructure networks and to allow one
   physical network infrastructure to provide:

   o  Multi-tenant virtual networks and traffic isolation among the
      virtual networks over the same physical network.

   o  Independent address spaces in individual virtual networks, such
      as MAC and IP addresses and TCP/UDP port numbers.

   o  Flexible Virtual Machine (VM) and/or workload placement,
      including the ability to move VMs from one server to another
      without requiring VM address or configuration changes, and the
      ability to perform a "hot move" with no disruption to the live
      applications running on the VMs.

   These characteristics of NVO3 help address the issues that cloud
   applications face in Data Centers [RFC7364].

   An NVO3 network may interconnect with another NVO3 virtual
   network, or with another physical network (i.e., not the physical
   network that the NVO3 network is built over), via a gateway.
   Example use cases for the latter include: 1) a DC migrates toward
   an NVO3 solution in steps, so that a portion of the tenant systems
   in a VN reside on virtualized servers while others remain on a
   LAN; 2) many DC applications serve Internet users who are on
   physical networks; 3) some applications, such as Big Data
   analytics, are CPU bound and may not run on virtualized resources.
   Some inter-VN policies can be enforced at the gateway.

   This document describes general NVO3 use cases that apply to
   various data centers.
   The three types of use cases described in this document are:

   o  Basic NVO3 virtual networks in a DC (Section 2).  All Tenant
      Systems (TSs) in the virtual network are located within the
      same DC.  The individual virtual networks can be either Layer 2
      (L2) or Layer 3 (L3).  The number of NVO3 virtual networks in a
      DC is much higher than what traditional VLAN-based virtual
      networks [IEEE 802.1Q] can support.  This case is often
      referred to as DC East-West traffic.

   o  Virtual networks that span multiple Data Centers and/or reach
      customer premises, i.e., an NVO3 virtual network in a DC
      interconnects with another virtual or physical network outside
      the data center.  An enterprise customer may use a traditional
      carrier VPN or an IPsec tunnel over the Internet to communicate
      with its systems in the DC.  This case is described in
      Section 3.

   o  DC applications or services that require an advanced network
      containing several NVO3 virtual networks interconnected by
      gateways.  Three scenarios are described in Section 4:
      1) using NVO3 and other network technologies to build a tenant
      network; 2) constructing several virtual networks as a tenant
      network; 3) applying NVO3 to a virtualized DC (vDC).

   The document uses the architecture reference model defined in
   [RFC7365] to describe the use cases.

1.1. Terminology

   This document uses the terminology defined in [RFC7365] and
   [RFC4364].  Some additional terms used in the document are listed
   here.

   DMZ: Demilitarized Zone.  A computer or small sub-network that
   sits between a trusted internal network, such as a corporate
   private LAN, and an untrusted external network, such as the public
   Internet.

   DNS: Domain Name Service [RFC1035]

   NAT: Network Address Translation [RFC1631]

   Note that a virtual network in this document refers to an NVO3
   virtual network in a DC [RFC7365].

2. Basic Virtual Networks in a Data Center

   A virtual network in a DC enables communication among Tenant
   Systems (TSs).  A TS can be a physical server/device or a virtual
   machine (VM) on a server, i.e., an end-device [RFC7365].  A
   Network Virtualization Edge (NVE) can be co-located with a TS,
   i.e., on the same end-device, or reside on a different device,
   e.g., a top-of-rack (ToR) switch.  A virtual network has a virtual
   network identifier (which can be globally unique or locally
   significant to NVEs).

   Tenant Systems attached to the same NVE may belong to the same or
   different virtual networks.  An NVE provides tenant traffic
   forwarding/encapsulation and obtains tenant system reachability
   information from a Network Virtualization Authority (NVA)
   [NVO3ARCH].  DC operators can construct multiple separate virtual
   networks and provide each with its own address space.

   Network Virtualization Overlay in this context means that a
   virtual network is implemented with an overlay technology, i.e.,
   within a DC that has an IP infrastructure, tenant traffic is
   encapsulated at its local NVE and carried by a tunnel to another
   NVE, where the packet is decapsulated and sent to the target
   tenant system.  This architecture decouples the tenant system
   address space and configuration from those of the infrastructure,
   which provides great flexibility for VM placement and mobility.
   It also means that the transit nodes in the infrastructure are not
   aware of the existence of the virtual networks or the tenant
   systems attached to them.  The tunneled packets are carried as
   regular IP packets and are sent to NVEs.  One tunnel may carry
   traffic belonging to multiple virtual networks; a virtual network
   identifier is used for traffic demultiplexing.  A tunnel
   encapsulation protocol is necessary for an NVE to encapsulate the
   packets from Tenant Systems and to encode other information in the
   tunneled packets to support the NVO3 implementation.
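   As a minimal sketch of the encapsulation and demultiplexing role
   of the virtual network identifier (the Python fragment below is a
   generic illustration, not any specific NVO3 encapsulation such as
   VXLAN or NVGRE; the 8-octet header layout and the function names
   are invented for this example):

      import struct

      # Hypothetical overlay header: 4-byte VN identifier + 4 pad bytes.
      VN_HEADER = struct.Struct("!I4x")

      def encapsulate(vnid, tenant_frame):
          # Ingress NVE: prepend the VN identifier; the result is then
          # carried as the payload of a regular IP tunnel packet.
          return VN_HEADER.pack(vnid) + tenant_frame

      def decapsulate(tunnel_payload):
          # Egress NVE: recover the VN identifier and original frame.
          (vnid,) = VN_HEADER.unpack_from(tunnel_payload)
          return vnid, tunnel_payload[VN_HEADER.size:]

      # One tunnel between two NVEs can carry several VNs' traffic;
      # the VNID tells the egress NVE which VN each packet belongs to.
      pkt = encapsulate(7001, b"tenant-frame-bytes")
      assert decapsulate(pkt) == (7001, b"tenant-frame-bytes")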
   A virtual network implemented with NVO3 may be an L2 or L3 domain.
   The virtual network can carry unicast traffic and/or multicast and
   broadcast/unknown-unicast (for L2 only) traffic from/to tenant
   systems.  There are several ways to transport the virtual
   network's broadcast, unknown-unicast, and multicast (BUM) traffic
   [NVO3MCAST].

   It is worth mentioning two distinct cases regarding NVE location.
   The first is where TSs and an NVE are co-located on a single end
   host/device, which means that the NVE can be aware of the TSs'
   state at any time via an internal API.  The second is where TSs
   and an NVE are not co-located, with the NVE residing on a network
   device; in this case, a protocol is necessary to allow the NVE to
   be aware of the TSs' state [NVO3HYVR2NVE].

   One virtual network can provide connectivity to many TSs that
   attach to many different NVEs in a DC.  TS dynamic placement and
   mobility result in frequent changes of the binding between a TS
   and an NVE.  The TS reachability update mechanisms need to be fast
   enough that the updates do not cause any communication disruption
   or interruption.  The capability of supporting many TSs in a
   virtual network, and many more virtual networks in a DC, is
   critical for an NVO3 solution.

   If a virtual network spans multiple DC sites, one design is to
   allow the network to seamlessly span the sites without termination
   at DC gateway routers.  In this case, the tunnel between a pair of
   NVEs can be carried within other intermediate tunnels over the
   Internet or other WANs, or the intra-DC and inter-DC tunnels can
   be stitched together to form a tunnel between the pair of NVEs
   that are in different DC sites.  Both cases will form one virtual
   network across multiple DC sites.

3. DC Virtual Network and External Network Interconnection

   Many customers (enterprises or individuals) who utilize a DC
   provider's compute and storage resources to run their applications
   need to access their systems hosted in a DC through the Internet
   or Service Providers' Wide Area Networks (WANs).  A DC provider
   can construct a virtual network that provides connectivity to all
   the resources designated for a customer and allows the customer to
   access the resources via a virtual gateway (vGW).  This, in turn,
   becomes a case of interconnecting a DC virtual network and the
   network at the customer site(s) via the Internet or WANs.  Two use
   cases are described here.

3.1. DC Virtual Network Access via the Internet

   A customer can connect to a DC virtual network via the Internet in
   a secure way.  Figure 1 illustrates this case.  The DC virtual
   network has an instance at NVE1 and NVE2, and the two NVEs are
   connected via an IP tunnel in the Data Center.  A set of tenant
   systems are attached to NVE1 on a server.  NVE2 resides on a DC
   Gateway (DC GW) device.
   NVE2 terminates the tunnel and uses the VNID in the packet to pass
   the packet to the corresponding vGW entity on the DC GW (the vGW
   is the default gateway for the virtual network).  A customer can
   access their systems, i.e., TS1 or TSn, in the DC via the Internet
   by using an IPsec tunnel [RFC4301].  The IPsec tunnel is
   configured between the vGW and the customer gateway at the
   customer site.  Either a static route or iBGP may be used for
   prefix advertisement.  The vGW provides IPsec functionality such
   as authentication and encryption; iBGP protocol traffic is carried
   within the IPsec tunnel.  Some vGW features are listed below:

   o  The vGW maintains the TS/NVE mappings and advertises the TS
      prefix to the customer via a static route or iBGP.

   o  Some vGW functions, such as firewall and load balancer, can be
      performed by locally attached network appliance devices.

   o  If the virtual network in the DC uses a different address space
      than the external users do, the vGW needs to provide the NAT
      function.

   o  More than one IPsec tunnel can be configured for redundancy.

   o  The vGW can be implemented on a server or a VM.  In this case,
      IP tunnels or IPsec tunnels can be used over the DC
      infrastructure.

   o  DC operators need to construct a vGW for each customer.

   Server+---------------+
         |  TS1      TSn |
         |   |...|       |
         | +-+---+-+     |         Customer Site
         | | NVE1  |     |           +-----+
         | +---+---+     |           | CGW |
         +-----+---------+           +--+--+
               |                        *
           L3 Tunnel                    *
               |                        *
   DC GW +-----+----------+         .--.  .--.
         | +---+---+      |       (    '*    '.--.
         | | NVE2  |      |    .-.'       *       )
         | +---+---+      |   (        *  Internet )
         | +---+---+.     |    (     *            /
         | |  vGW  | * * * * * * * *  '-'       '-'
         | +-------+ |    |  IPsec  \../ \.--/'
         | +--------+     |  Tunnel
         +----------------+

          DC Provider Site

     Figure 1 - DC Virtual Network Access via the Internet

3.2. DC VN and SP WAN VPN Interconnection

   In this case, an enterprise customer wants to use a Service
   Provider (SP) WAN VPN [RFC4364] [RFC7432] to interconnect its
   sites with a virtual network in a DC site.  The Service Provider
   constructs a VPN for the enterprise customer.  Each enterprise
   site peers with an SP PE.  The DC provider and the VPN Service
   Provider can build a DC virtual network (VN) and a VPN
   independently, and then interconnect them via a local link, or via
   a tunnel between the DC GW and WAN PE devices.  The control plane
   interconnection options between the DC and WAN are described in
   [RFC4364].  Using Option A with VRF-LITE [VRF-LITE], both ASBRs,
   i.e., the DC GW and the SP PE, maintain a routing/forwarding table
   (VRF).  Using Option B, the DC ASBR and SP ASBR do not maintain
   the VRF table; they only maintain the VN and VPN identifier
   mappings, i.e., label mappings, and swap the label on the packets
   in the forwarding process.  Both Options A and B allow the VN and
   the VPN to use their own identifiers, with the two identifiers
   mapped to each other at the DC GW.  With Option C, the VN and VPN
   use the same identifier, and both ASBRs perform tunnel stitching,
   i.e., tunnel segment mapping.  Each option has its pros and cons
   [RFC4364] and has been deployed in SP networks depending on the
   applications in use.  BGP is used with these options for route
   distribution between DCs and SP WANs.  Note that if the DC is the
   SP's own Data Center, the DC GW and SP PE in this case can be
   merged into one device that performs the interworking of the VN
   and VPN within an AS.
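   The identifier handling that distinguishes these options can be
   sketched as follows (a Python toy model of the Option B mapping
   concept only; the table contents and function names are
   hypothetical and do not describe any particular BGP
   implementation):

      # Option B: the ASBRs keep only VN/VPN identifier (label)
      # mappings and swap the identifier as packets cross the border.
      vn_to_vpn = {7001: 30001, 7002: 30002}  # hypothetical VNID -> VPN label
      vpn_to_vn = {label: vnid for vnid, label in vn_to_vpn.items()}

      def toward_wan(vnid):
          # DC GW: swap the VN identifier for the VPN label.
          return vn_to_vpn[vnid]

      def toward_dc(vpn_label):
          # Reverse direction: swap the VPN label back to the VNID.
          return vpn_to_vn[vpn_label]

      assert toward_dc(toward_wan(7001)) == 7001

   Under Option A, each ASBR would instead hold a full VRF for the
   customer; under Option C, no swap occurs because the VN and VPN
   share one identifier end to end.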
   The configurations above allow the enterprise networks to
   communicate with the tenant systems attached to the VN in a DC
   without interfering with the DC provider's underlying physical
   networks or with other virtual networks.  The enterprise can use
   its own address space in the VN.  The DC provider can manage which
   VM and storage elements attach to the VN.  The enterprise customer
   manages which applications run on the VMs in the VN without
   knowing the location of the VMs in the DC (see Section 4 for more
   details).

   Furthermore, in this use case, the DC operator can move the VMs
   assigned to the enterprise from one server to another in the DC
   without the enterprise customer being aware, i.e., with no impact
   on the enterprise's "live" applications.  Such advanced
   technologies bring DC providers great benefits in offering cloud
   services, but add some requirements for NVO3 [RFC7364] as well.

4. DC Applications Using NVO3

   NVO3 technology provides DC operators with flexibility in
   designing and deploying different applications in an end-to-end
   virtualization overlay environment.  Operators no longer need to
   worry about the constraints of the DC physical network
   configuration when creating VMs and configuring a virtual network.
   A DC provider may use NVO3 in various ways, in conjunction with
   other physical and/or virtual networks in the DC, for various
   purposes.  This section highlights some such use cases.

4.1. Supporting Multiple Technologies and Applications

   Servers deployed in a large data center are often installed at
   different times, and they may have different capabilities and
   features.  Some servers may be virtualized, while others may not;
   some may be equipped with virtual switches, while others may not.
   For the servers equipped with hypervisor-based virtual switches,
   some may support VXLAN [RFC7348] encapsulation, some may support
   NVGRE encapsulation [RFC7637], and some may not support any
   encapsulation.  To construct a tenant network among these servers
   and the ToR switches, operators can construct one traditional VLAN
   network and two virtual networks, where one uses VXLAN
   encapsulation and the other uses NVGRE, and interconnect these
   three networks via a gateway or virtual GW.  The GW performs
   packet encapsulation/decapsulation translation between the
   networks.

   A data center may also be constructed with multi-tier zones, where
   each zone has different access permissions and runs different
   applications.  For example, a three-tier zone design has a front
   zone (Web tier) with Web applications, a mid zone (application
   tier) where service applications such as credit payment or ticket
   booking run, and a back zone (database tier) with data.  External
   users are only able to communicate with the Web applications in
   the front zone.  In this case, communications between the zones
   must pass through the security GW/firewall.  One virtual network
   can be configured in each zone, and a GW can be used to
   interconnect two virtual networks, i.e., two zones.  If the
   virtual networks in the individual zones use different
   implementations, the GW needs to support those implementations as
   well.

4.2. Tenant Network with Multiple Subnets

   A tenant network may contain multiple subnets.  The DC physical
   network needs to support the connectivity for many such tenant
   networks.  In some cases, the inter-subnet policies can be placed
   at designated gateway devices.  Such a design requires the inter-
   subnet traffic to be sent to one of the gateway devices first for
   policy checking, which may cause traffic to "hairpin" at the
   gateway in a DC.  It is desirable for an NVE to be able to hold
   some policies and forward inter-subnet traffic directly.  To
   reduce the burden on the NVE, a hybrid design may be deployed,
   i.e., an NVE performs forwarding for selected inter-subnet traffic
   while the designated GW performs forwarding for the rest.  For
   example, each NVE performs inter-subnet forwarding for intra-DC
   traffic while the designated GW is used for traffic to/from a
   remote DC.
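   A minimal sketch of such a hybrid forwarding decision (Python; the
   table contents and names are invented for illustration, and a real
   NVE would hold actual policy and reachability state):

      def next_hop(dst_subnet, local_policy):
          # Hybrid design: the NVE forwards inter-subnet traffic
          # directly for subnets it holds a policy entry for (e.g.,
          # intra-DC), and hands everything else to the designated GW.
          return local_policy.get(dst_subnet, "designated-GW")

      local_policy = {"10.1.2.0/24": "forward-directly",
                      "10.1.3.0/24": "forward-directly"}
      assert next_hop("10.1.2.0/24", local_policy) == "forward-directly"
      assert next_hop("192.0.2.0/24", local_policy) == "designated-GW"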
   A tenant network may span multiple Data Centers at different
   locations.  DC operators may configure an L2 VN within each DC and
   an L3 VN between DCs for a tenant network.  For this
   configuration, the virtual L2/L3 gateway can be implemented on the
   DC GW device.  Figure 2 illustrates this configuration.

   Figure 2 depicts two DC sites.  Site A constructs one L2 VN, say
   L2VNa, on NVE1, NVE2, and NVE5.  NVE1 and NVE2 reside on the
   servers that host multiple tenant systems.  NVE5 resides on the DC
   GW device.  Site Z has a similar configuration, with L2VNz on
   NVE3, NVE4, and NVE6.  An L3 VN, L3VNx, is configured on NVE5 at
   Site A and NVE6 at Site Z.  An internal Virtual Interface of
   Routing and Bridging (VIRB) is used between the L2VNI and the
   L3VNI on NVE5 and NVE6, respectively.  The L2VNI requires the
   MAC/NVE mapping table, and the L3VNI requires the IP prefix/NVE
   mapping table.  A packet arriving at NVE5 from L2VNa will be
   decapsulated, converted into an IP packet, and then encapsulated
   and sent to Site Z.  A packet arriving at NVE5 from L3VNx will be
   decapsulated, converted into a MAC frame, and then encapsulated
   and sent within Site A.  The Address Resolution Protocol (ARP)
   [RFC826] can be used to obtain the MAC address for an IP address
   in L2VNa.  The policies can be checked at the VIRB.

   Note that L2VNa, L2VNz, and L3VNx in Figure 2 are NVO3 virtual
   networks.

   NVE5/DCGW+------------+                +-----------+NVE6/DCGW
            |  +-----+   |''''''''''''''''|  +-----+  |
            |  |L3VNI+---+'    L3VNx     '+--+L3VNI|  |
            |  +--+--+   |''''''''''''''''|  +--+--+  |
            |     |VIRB  |                |  VIRB|    |
            |  +--+--+   |                |  +--+--+  |
            |  |L2VNI|   |                |  |L2VNI|  |
            |  +--+--+   |                |  +--+--+  |
            +----+-------+                +------+----+
             ''''|''''''''''              ''''''|'''''''
            '    L2VNa     '             '    L2VNz    '
    NVE1/S ''/'''''''''\'' NVE2/S  NVE3/S'''/'''''''\'' NVE4/S
    +-----+---+     +----+----+    +------+--+    +----+----+
    | +--+--+ |     | +--+--+ |    | +---+-+ |    | +--+--+ |
    | |L2VNI| |     | |L2VNI| |    | |L2VNI| |    | |L2VNI| |
    | ++---++ |     | ++---++ |    | ++---++ |    | ++---++ |
    +--+---+--+     +--+---+--+    +--+---+--+    +--+---+--+
       |...|           |...|          |...|          |...|

        Tenant Systems                 Tenant Systems

          DC Site A                      DC Site Z

      Figure 2 - Tenant Virtual Network with Bridging/Routing
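   To make the two mapping tables concrete, the following toy sketch
   (Python; the addresses, names, and table contents are invented for
   illustration) shows how a VIRB-style lookup could pick an egress
   NVE either by MAC (L2VNI) or by IP prefix (L3VNI):

      import ipaddress

      # Hypothetical tables on NVE5:
      mac_to_nve = {"00:00:5e:00:53:01": "NVE1",   # L2VNI: MAC/NVE
                    "00:00:5e:00:53:02": "NVE2"}
      prefix_to_nve = {"198.51.100.0/24": "NVE6"}  # L3VNI: IP prefix/NVE

      def forward(dst_mac=None, dst_ip=None):
          # Frames within L2VNa are switched on MAC; traffic crossing
          # the VIRB toward L3VNx is routed on an IP prefix lookup.
          if dst_mac in mac_to_nve:
              return mac_to_nve[dst_mac]
          addr = ipaddress.ip_address(dst_ip)
          for prefix, nve in prefix_to_nve.items():
              if addr in ipaddress.ip_network(prefix):
                  return nve
          raise LookupError("unknown destination; query the NVA")

      assert forward(dst_mac="00:00:5e:00:53:01") == "NVE1"
      assert forward(dst_ip="198.51.100.7") == "NVE6"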
4.3. Virtualized Data Center (vDC)

   An enterprise Data Center today may deploy routers, switches, and
   network appliance devices to construct its internal network, DMZ,
   and external network access; it may have many servers and storage
   systems running various applications.  With NVO3 technology, a DC
   provider can construct a virtualized DC over its physical DC
   infrastructure and offer a virtual DC service to enterprise
   customers.  A vDC at the DC provider site provides the same
   capability as a physical DC at the customer site.  A customer
   manages their own applications running in their vDC.  A DC
   provider can further offer different network service functions to
   the customer.  The network service functions may include a
   firewall, DNS, a load balancer, a gateway, etc.

   Figure 3 below illustrates one such scenario.  For simplicity, it
   only shows the L3 VN and L2 VNs in abstraction.  In this example,
   the DC provider's operators create several L2 VNs (L2VNx, L2VNy,
   L2VNz) to group the tenant systems together on a per-application
   basis, and one L3 VN (L3VNa) for the internal routing.  A network
   firewall and gateway runs on a VM or server that connects to L3VNa
   and is used for inbound and outbound traffic processing.  A load
   balancer (LB) is used in L2VNx.  A VPN is also built between the
   gateway and the enterprise router.  The enterprise customer runs
   Web/Mail/Voice applications on VMs at the provider DC site, which
   may be spread across many servers.  The users at the enterprise
   site access the applications running in the provider DC site via
   the VPN; Internet users access these applications via the
   gateway/firewall at the provider DC.

   The enterprise customer decides which applications should be
   accessible only via the intranet and which should be accessible
   via both the intranet and the Internet, and it configures the
   proper security policy and gateway function at the
   firewall/gateway.  Furthermore, an enterprise customer may want
   multiple zones in a vDC (see Section 4.1) for security and/or the
   ability to set different QoS levels for the different
   applications.

   The vDC use case requires the NVO3 solution to provide DC
   operators with an easy and quick way to create a VN and NVEs for
   any vDC design, to allocate TSs and assign them to the
   corresponding VN, and to depict the vDC topology and
   manage/configure individual elements in the vDC in a secure way.

       Internet                       ^ Internet
                                      |
          ^                        +--+---+
          |                        |  GW  |
          |                        +--+---+
          |                           |
   +------+---------+              +--+---+
   |Firewall/Gateway+------ VPN----+router|
   +------+---------+              +-+--+-+
          |                          |  |
       ...+....                      |..|
   +-------: L3 VNa :---------+      LANs
   +-+-+   ........           |
   |LB |      |               |    Enterprise Site
   +-+-+      |               |
    ...+...  ...+...       ...+...
   : L2VNx :: L2VNy :     : L2VNz :
    .......  .......       .......
     |..|     |..|           |..|
     |  |     |  |           |  |
   Web Apps  Mail Apps    VoIP Apps

         Provider DC Site

       Figure 3 - Virtual Data Center (vDC)
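   The provisioning workflow implied by this use case can be
   summarized in a toy Python sketch (the Controller class and all
   call names here are hypothetical, not an NVO3-defined API; a real
   control plane would drive the NVA and NVEs to realize each step in
   a secure way):

      class Controller:
          """Toy stand-in for a vDC orchestrator (illustrative only)."""
          def __init__(self):
              self.vns = {}

          def create_vn(self, kind, name):
              # Create an L2 or L3 VN; a real system would also
              # instantiate or configure NVEs for it.
              self.vns[name] = {"kind": kind, "members": [], "links": []}
              return name

          def connect(self, vn_a, vn_b):
              # Interconnect two VNs (e.g., an L2 VN to the L3 VN).
              self.vns[vn_a]["links"].append(vn_b)

          def attach(self, vn, ts):
              # Allocate a TS and assign it to the corresponding VN.
              self.vns[vn]["members"].append(ts)

      c = Controller()
      l3vna = c.create_vn("L3", "L3VNa")
      for name in ("L2VNx", "L2VNy", "L2VNz"):
          c.connect(c.create_vn("L2", name), l3vna)
      c.attach(l3vna, "firewall/gateway")  # inbound/outbound processing
      c.attach("L2VNx", "LB")              # load balancer for Web apps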
5. Summary

   This document describes some general and potential NVO3 use cases
   in DCs.  The combination of these use cases gives operators the
   flexibility and capability to design more sophisticated cases for
   various cloud applications.

   DC services may vary, from Infrastructure as a Service (IaaS) to
   Platform as a Service (PaaS) to Software as a Service (SaaS).  In
   these services, NVO3 virtual networks are just one portion of the
   service.

   NVO3 uses tunneling techniques to deliver VN traffic over an IP
   network.  A tunnel encapsulation protocol is necessary.  An NVO3
   tunnel may, in turn, be tunneled over other intermediate tunnels
   over the Internet or other WANs.

   An NVO3 virtual network in a DC may be accessed by external users
   in a secure way.  Many existing technologies can help achieve
   this.

   NVO3 implementations may vary.  Some DC operators prefer to use a
   centralized controller to manage tenant system reachability in a
   virtual network, while other operators prefer to use distributed
   protocols to advertise the tenant system location, i.e., the NVE
   location.  When a tenant network spans multiple DCs and WANs, each
   network administration domain may use different methods to
   distribute the tenant system locations.  In this case, both
   control plane and data plane interworking are necessary.

6. Security Considerations

   Security is a concern.  DC operators need to provide a tenant with
   a secured virtual network, which means that one tenant's traffic
   is isolated from other tenants' traffic as well as from non-
   tenants' traffic.  DC operators also need to protect against a
   tenant application attacking the underlying DC network through the
   tenant's virtual network; further, they need to protect against a
   tenant application attacking another tenant application via the DC
   infrastructure network.  For example, a tenant application may
   attempt to generate a large volume of traffic to overload the DC's
   underlying network.  An NVO3 solution has to address these issues.

7. IANA Considerations

   This document does not request any action from IANA.

8. References

8.1. Normative References

   [RFC7364]  Narten, T., et al., "Problem Statement: Overlays for
              Network Virtualization", RFC 7364, October 2014.

   [RFC7365]  Lasserre, M., Balus, F., Morin, T., Bitar, N., and Y.
              Rekhter, "Framework for Data Center (DC) Network
              Virtualization", RFC 7365, October 2014.

8.2. Informative References

   [IEEE 802.1Q]  IEEE, "IEEE Standard for Local and metropolitan
              area networks -- Media Access Control (MAC) Bridges and
              Virtual Bridged Local Area Networks", IEEE Std 802.1Q,
              2011.

   [NVO3HYVR2NVE]  Li, Y., et al., "Hypervisor to NVE Control Plane
              Requirements", draft-ietf-nvo3-hpvr2nve-cp-req-01, work
              in progress.

   [NVO3ARCH]  Black, D., et al., "An Architecture for Overlay
              Networks (NVO3)", draft-ietf-nvo3-arch-02, work in
              progress.

   [NVO3MCAST]  Ghanwani, A., "Framework of Supporting Applications
              Specific Multicast in NVO3", draft-ghanwani-nvo3-app-
              mcast-framework-02, work in progress.

   [RFC826]   Plummer, D., "An Ethernet Address Resolution Protocol",
              STD 37, RFC 826, November 1982.

   [RFC1035]  Mockapetris, P., "Domain Names - Implementation and
              Specification", STD 13, RFC 1035, November 1987.

   [RFC1631]  Egevang, K. and P. Francis, "The IP Network Address
              Translator (NAT)", RFC 1631, May 1994.

   [RFC4301]  Kent, S. and K. Seo, "Security Architecture for the
              Internet Protocol", RFC 4301, December 2005.

   [RFC4364]  Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
              Networks (VPNs)", RFC 4364, February 2006.

   [RFC7348]  Mahalingam, M., Dutt, D., et al., "Virtual eXtensible
              Local Area Network (VXLAN): A Framework for Overlaying
              Virtualized Layer 2 Networks over Layer 3 Networks",
              RFC 7348, August 2014.

   [RFC7432]  Sajassi, A., Ed., Aggarwal, R., Bitar, N., Isaac, A.,
              and J. Uttaro, "BGP MPLS-Based Ethernet VPN", RFC 7432,
              February 2015.

   [RFC7637]  Garg, P. and Y. Wang, "NVGRE: Network Virtualization
              Using Generic Routing Encapsulation", RFC 7637,
              September 2015.

   [VRF-LITE]  Cisco, "Configuring VRF-lite", http://www.cisco.com

Contributors

   Vinay Bannai
   PayPal
   2211 N. First St,
   San Jose, CA 95131
   Phone: +1-408-967-7784
   Email: vbannai@paypal.com
   Ram Krishnan
   Brocade Communications
   San Jose, CA 95134
   Phone: +1-408-406-7890
   Email: ramk@brocade.com

   Kieran Milne
   Juniper Networks
   1133 Innovation Way
   Sunnyvale, CA 94089
   Phone: +1-408-745-2000
   Email: kmilne@juniper.net

Acknowledgements

   The authors would like to thank Sue Hares, Young Lee, David Black,
   Pedro Marques, Mike McBride, David McDysan, Randy Bush, Uma
   Chunduri, and Eric Gray for their reviews, comments, and
   suggestions.

Authors' Addresses

   Lucy Yong
   Huawei Technologies

   Phone: +1-918-808-1918
   Email: lucy.yong@huawei.com

   Mehmet Toy
   Comcast
   1800 Bishops Gate Blvd.
   Mount Laurel, NJ 08054

   Phone: +1-856-792-2801
   E-mail: mehmet_toy@cable.comcast.com

   Aldrin Isaac
   Bloomberg

   E-mail: aldrin.isaac@gmail.com

   Vishwas Manral
   Ionos Networks

   Email: vishwas@ionosnetworks.com

   Linda Dunbar
   Huawei Technologies
   5340 Legacy Dr.
   Plano, TX 75025 US

   Phone: +1-469-277-5840
   Email: linda.dunbar@huawei.com