Network Working Group                                            L. Yong
Internet Draft                                                    Huawei
Category: Informational                                           M. Toy
                                                                 Comcast
                                                                      A.
                                                                   Isaac
                                                               Bloomberg
                                                               V. Manral
                                                         Hewlett-Packard
                                                               L. Dunbar
                                                                  Huawei

Expires: January 2015                                       July 1, 2014

           Use Cases for DC Network Virtualization Overlays

                      draft-ietf-nvo3-use-case-04

Abstract

   This document describes DC Network Virtualization (NVO3) use cases
   that may be deployed in various data centers and apply to different
   applications.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire in January 2015.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
      1.1. Contributors
      1.2. Terminology
   2. Basic Virtual Networks in a Data Center
   3. Interconnecting DC Virtual Network and External Networks
      3.1. DC Virtual Network Access via Internet
      3.2. DC VN and Enterprise Sites Interconnected via SP WAN
   4. DC Applications Using NVO3
      4.1. Supporting Multiple Technologies and Applications in a DC
      4.2. Tenant Network with Multiple Subnets or across Multiple DCs
      4.3. Virtualized Data Center (vDC)
   5. OAM Considerations
   6. Summary
   7. Security Considerations
   8. IANA Considerations
   9. Acknowledgements
   10. References
      10.1. Normative References
      10.2. Informative References
   Authors' Addresses

1. Introduction

   Server virtualization has changed the IT industry in terms of
   efficiency, cost, and the speed of providing new applications and/or
   services. However, today's data center networks offer limited
   support for cloud applications and multi-tenant networks [NVO3PRBM].
   The goal of DC Network Virtualization Overlays, i.e.
   NVO3, is to decouple the communication among tenant systems from DC
   physical networks and to allow one physical network infrastructure
   to provide: 1) multi-tenant virtual networks and traffic isolation
   among the virtual networks over the same physical network; 2)
   independent address spaces in individual virtual networks, such as
   MAC and IP addresses and TCP/UDP ports; 3) flexible VM or workload
   placement, including the ability to move VMs from one server to
   another without requiring any VM address or configuration change,
   and the ability to perform a hot move with no disruption to the live
   application running on the VM. These characteristics will help
   address the issues that today's cloud applications face [NVO3PRBM].

   An NVO3 network often needs to interconnect with a physical network,
   with tenant systems attached to both networks. For example: 1) a DC
   that migrates toward an NVO3 solution will do so in steps; 2) many
   DC applications serve Internet users that reside on physical
   networks; 3) some applications are CPU bound, such as Big Data
   analytics, and may not run on virtualized resources.

   This document describes general NVO3 use cases that apply to various
   data centers. The three types of use cases described here are:

   o  Basic virtual networks in a DC. All Tenant Systems (TSs) of a
      virtual network are located within one DC. The virtual networks
      can be either L2 or L3. The number of virtual networks to be
      supported by NVO3 is usually much greater than what traditional
      VLANs can support. This case is often referred to as DC East-West
      traffic.

   o  Virtual networks that span multiple data centers or customer
      premises, i.e. a virtual network that has some nodes in one DC
      and other nodes in other places. An enterprise customer may use a
      traditional VPN provided by a carrier, or an IPsec tunnel over
      the Internet, to connect the TSs across multiple DCs and customer
      premises.
   o  DC applications or services that may use NVO3. Three scenarios
      are described: 1) using NVO3 and other network technologies to
      build a tenant network; 2) constructing several virtual networks
      as a tenant network; 3) applying NVO3 to a virtualized DC (vDC).

   This document uses the architecture reference model defined in
   [NVO3FRWK] to describe the use cases.

1.1. Contributors

   Vinay Bannai
   PayPal
   2211 N. First St,
   San Jose, CA 95131
   Phone: +1-408-967-7784
   Email: vbannai@paypal.com

   Ram Krishnan
   Brocade Communications
   San Jose, CA 95134
   Phone: +1-408-406-7890
   Email: ramk@brocade.com

1.2. Terminology

   This document uses the terminology defined in [NVO3FRWK] and
   [RFC4364]. Some additional terms used in the document are listed
   here.

   CPE: Customer Premises Equipment

   DMZ: Demilitarized Zone. A computer or small subnetwork that sits
   between a trusted internal network, such as a corporate private LAN,
   and an untrusted external network, such as the public Internet.

   DNS: Domain Name Service

   NAT: Network Address Translation

   VIRB: Virtual Integrated Routing/Bridging

   Note that a virtual network in this document means an overlay
   virtual network instance.

2. Basic Virtual Networks in a Data Center

   A virtual network may exist within a DC. Such a network enables
   communication among Tenant Systems (TSs). A TS may be a physical
   server/device or a virtual machine (VM) on a server. A Network
   Virtualization Edge (NVE) may be co-located with a TS, i.e. on the
   same end device, or may reside on a different device, e.g. a top-of-
   rack (ToR) switch. A virtual network has a virtual network
   identifier (which may be locally or globally unique) that allows an
   NVE to properly differentiate it from other virtual networks.

   Tenant Systems attached to the same NVE may belong to the same or
   different virtual networks.
   Multiple virtual networks can be constructed in such a way that
   policies are enforced when the TSs in one virtual network
   communicate with the TSs in other virtual networks. An NVE provides
   tenant traffic forwarding/encapsulation and obtains tenant system
   reachability information from a Network Virtualization Authority
   (NVA) [NVO3ARCH]. Furthermore, DC operators may construct many
   tenant networks that have no communication between them at all. In
   this case, each tenant network may use its own address spaces, such
   as MAC and IP. One tenant network may comprise one or more virtual
   networks.

   A Tenant System may also be configured with one or multiple
   addresses and participate in multiple virtual networks, i.e. use the
   same or different addresses in different virtual networks. For
   example, a TS may be a NAT gateway or a firewall and connect to more
   than one virtual network.

   Network Virtualization Overlay in this context means that a virtual
   network is implemented with an overlay technology, i.e. traffic from
   one NVE to another is sent via a tunnel between the pair of NVEs
   [NVO3FRWK]. This architecture decouples the tenant system address
   scheme and configuration from those of the infrastructure, which
   brings great flexibility for VM placement and mobility. It also
   keeps the transit nodes in the infrastructure unaware of the
   existence of the virtual networks. One tunnel may carry traffic
   belonging to different virtual networks; a virtual network
   identifier is used for traffic demultiplexing.

   A virtual network may be an L2 or L3 domain. The TSs attached to an
   NVE may belong to different virtual networks, some L2 and some L3. A
   virtual network may carry unicast traffic and/or broadcast/
   multicast/unknown-unicast (BUM) traffic from/to tenant systems.
   There are several ways to transport BUM traffic [NVO3MCAST].

   It is worth mentioning two distinct cases here.
   The first is that the TSs and the NVE are co-located on the same end
   device, which means that the NVE can be made aware of the TS state
   at any time via an internal API. The second is that the TSs and the
   NVE are remotely connected, i.e. connected via a switched network or
   a point-to-point link. In this case, a protocol is necessary for the
   NVE to learn the TS state.

   One virtual network may connect many TSs that attach to many
   different NVEs. Dynamic TS placement and mobility result in frequent
   changes to the TS-to-NVE bindings. The TS reachability update
   mechanism needs to be fast enough not to cause any service
   interruption. The capability of supporting many TSs in a virtual
   network, and many more virtual networks in a DC, is critical for an
   NVO3 solution.

   If a virtual network spans multiple DC sites, one design is to allow
   the network to seamlessly span the sites without termination at the
   DC gateway routers. In this case, the tunnel between a pair of NVEs
   may in turn be tunneled over other intermediate tunnels across the
   Internet or other WANs, or the intra-DC and inter-DC tunnels may be
   stitched together to form an end-to-end virtual network across DCs.

3. Interconnecting DC Virtual Network and External Networks

   Customers (an enterprise or individuals) who utilize a DC provider's
   compute and storage resources to run their applications need to
   access their systems hosted in the DC through the Internet or
   Service Providers' WANs. A DC provider may construct a virtual
   network that connects all the resources designated for a customer
   and allow the customer to access those resources via a virtual
   gateway (vGW). This, in turn, becomes a case of interconnecting a DC
   virtual network and the network at the customer site(s) via the
   Internet or WANs. Two cases are described here.
3.1. DC Virtual Network Access via Internet

   A customer can connect to a DC virtual network via the Internet in a
   secure way. Figure 1 illustrates this case. A virtual network is
   configured on NVE1 and NVE2, and the two NVEs are connected via an
   L3 tunnel in the data center. A set of tenant systems is attached to
   NVE1 on a server. NVE2 resides on a DC gateway device. NVE2
   terminates the tunnel and uses the VNID on the packet to pass the
   packet to the corresponding vGW entity on the DC GW. A customer can
   access their systems, i.e. TS1 or TSn, in the DC via the Internet by
   using an IPsec tunnel [RFC4301]. The IPsec tunnel is configured
   between the vGW and the customer gateway at the customer site.
   Either static routes or BGP may be used for peer routes. The vGW
   provides IPsec functionality such as authentication and encryption.
   Note that: 1) some vGW functions, such as firewall and load
   balancer, may also be performed by locally attached network
   appliance devices; 2) if the virtual network in the DC uses a
   different address space than the external users, the vGW needs to
   provide a NAT function; 3) more than one IPsec tunnel can be
   configured for redundancy; 4) the vGW may be implemented on a server
   or a VM, in which case IP tunnels or IPsec tunnels may be used over
   the DC infrastructure.

   Server +---------------+
          |  TS1     TSn  |
          |   |...|       |
          | +-+---+-+     |             Customer Site
          | | NVE1  |     |               +-----+
          | +---+---+     |               | CGW |
          +-----+---------+               +--+--+
                |                            *
            L3 Tunnel                        *
                |                            *
   DC GW  +-----+----------+          .--.      .--.
          |  +--+----+     |        (    '*         '.--.
          |  | NVE2  |     |      .-.'       *           )
          |  +--+----+     |     (           *  Internet )
          |  +--+----+     |      (          *          /
          |  |  vGW  | * * * * * * * * * * *  '-'    '-'
          |  +-------+     |      IPsec       \../ \.--/'
          |  +--------+    |      Tunnel
          +----------------+

            DC Provider Site

      Figure 1 DC Virtual Network Access via Internet
3.2. DC VN and Enterprise Sites Interconnected via SP WAN

   An enterprise company may lease VM and storage resources hosted in a
   third-party DC to run its applications. For example, the company may
   run its web applications at the third-party site but run its backend
   applications in its own DCs. The web applications and the backend
   applications need to communicate privately. The third-party DC may
   construct one or more virtual networks to connect all the VMs and
   storage running the enterprise web applications. The company may buy
   a p2p private tunnel, such as a VPWS, from a Service Provider (SP)
   to interconnect its site and the virtual network at the third-party
   site. A protocol is necessary to exchange reachability between the
   two peering points, and the traffic is carried over the tunnel. If
   an enterprise has multiple sites, it may buy multiple p2p tunnels to
   form a mesh interconnection among its sites and the third-party
   site. This requires each site to peer with all the other sites for
   route distribution.

   Another way to achieve multi-site interconnection is to use SP VPN
   services, in which each site only peers with an SP PE site. A DC
   provider and a VPN SP may build a DC virtual network (VN) and a VPN
   independently. The VPN interconnects several enterprise sites and
   the DC virtual network at the DC site, i.e. the VPN site. The DC VN
   and the SP VPN interconnect via a local link or a tunnel. The
   control-plane interconnection options are described in RFC 4364
   [RFC4364]. In Option A with VRF-LITE [VRF-LITE], both the DC GW and
   the SP PE maintain a routing/forwarding table and perform a table
   lookup when forwarding. In Option B, the DC GW and the SP PE do not
   maintain a forwarding table; they only maintain the VN-to-VPN
   identifier mapping and swap the identifier on the packet in the
   forwarding process. Both Options A and B require tunnel termination.
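The difference between Options A and B can be sketched as follows. This is a simplified, purely illustrative contrast with invented tables, identifiers, and next-hop names; it is not a statement of how any DC GW or PE implements RFC 4364.

```python
# Sketch of RFC 4364 interconnection Options A and B at the DC GW /
# SP PE boundary. All names and values are invented for illustration.

# Option A (VRF-to-VRF): the border devices keep full per-VN
# forwarding state and forward by route lookup (exact-match here for
# brevity, rather than longest-prefix match).
option_a_vrf = {
    ("VN-blue", "10.1.0.0/16"): "next-hop-PE1",
    ("VN-red",  "10.1.0.0/16"): "next-hop-PE2",  # overlapping tenant space is fine
}

def forward_option_a(vn, prefix):
    """Per-VN route lookup; full forwarding tables on GW and PE."""
    return option_a_vrf[(vn, prefix)]

# Option B: no per-VN routes on the border devices, only a VN-to-VPN
# identifier mapping that is swapped on each forwarded packet.
option_b_map = {"VNID-1001": "VPN-label-2001",
                "VNID-1002": "VPN-label-2002"}

def forward_option_b(packet):
    """Swap the virtual network identifier; keep no tenant routes."""
    packet = dict(packet)  # leave the caller's packet untouched
    packet["id"] = option_b_map[packet["id"]]
    return packet
```

The sketch shows why Option B scales better in forwarding state at the border (one mapping entry per VN instead of one entry per tenant route) at the cost of holding the routes elsewhere.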
   In Option C, the DC GW and the SP PE use the same identifier for the
   VN and the VPN and just perform tunnel stitching, i.e. they change
   the tunnel endpoints. Each option has pros and cons (see RFC 4364)
   and has been deployed in SP networks depending on the application.
   BGP may be used in these options for route distribution. Note that
   if the provider DC is the SP's own data center, the DC GW and the PE
   may be on one device.

   This configuration allows the enterprise networks to communicate
   with the tenant systems attached to the VN in a provider DC without
   interfering with the DC provider's underlying physical networks or
   other virtual networks in the DC. The enterprise may use its own
   address space on the tenant systems in the VN. The DC provider can
   manage which VMs and storage are attached to the VN. The enterprise
   customer manages which applications run on the VMs in the VN. See
   Section 4 for more.

   The interesting feature in this use case is that the VN and the
   compute resources are managed by the DC provider. The DC operator
   can place them on any server without notifying the enterprise or the
   WAN SP, because the DC physical network is completely isolated from
   the carrier and enterprise networks. Furthermore, the DC operator
   may move the VMs assigned to the enterprise from one server to
   another in the DC without the enterprise customer's awareness, i.e.
   with no impact on the enterprise's 'live' applications running on
   these resources. Such advanced features bring DC providers great
   benefits in serving cloud applications but also add some
   requirements for NVO3 [NVO3PRBM].
4. DC Applications Using NVO3

   NVO3 gives DC operators flexibility in designing and deploying
   different applications in an end-to-end virtualization overlay
   environment, where the operators no longer need to worry about the
   constraints of the DC physical network configuration when creating
   VMs and configuring a virtual network. A DC provider may use NVO3 in
   various ways, and may also use it in conjunction with physical
   networks in the DC, for many reasons. This section highlights some
   use cases.

4.1. Supporting Multiple Technologies and Applications in a DC

   The servers deployed in a large data center are most likely rolled
   in at different times and may have different capabilities/features.
   Some servers may be virtualized, some may not; some may be equipped
   with virtual switches, some may not. Of the servers equipped with
   hypervisor-based virtual switches, some may support VxLAN [VXLAN]
   encapsulation, some may support NVGRE encapsulation [NVGRE], and
   some may not support any encapsulation. To construct a tenant
   network among these servers and the ToR switches, operators may
   construct one virtual network and one traditional VLAN network, or
   two virtual networks where one uses VxLAN encapsulation and the
   other uses NVGRE.

   In these cases, a gateway device or a virtual GW is used to
   participate in multiple virtual networks. It performs packet
   encapsulation/decapsulation and may also perform address mapping or
   translation, etc.

   A data center may also be constructed with multi-tier zones, where
   each zone has different access permissions and runs different
   applications. For example, a three-tier zone design has a front zone
   (Web tier) with Web applications, a mid zone (application tier) with
   service applications such as payment and booking, and a back zone
   (database tier) with data.
   External users are only able to communicate with the Web
   applications in the front zone. In this case, communication between
   the zones must pass through a security GW/firewall. One virtual
   network may be configured in each zone, and a GW is used to
   interconnect two virtual networks. If individual zones use different
   implementations, the GW needs to support those implementations as
   well.

4.2. Tenant Network with Multiple Subnets or across Multiple DCs

   A tenant network may contain multiple subnets, and the DC physical
   network needs to support the connectivity for many such tenant
   networks. The inter-subnet policies may be placed at some designated
   gateway devices only. Such a design requires the inter-subnet
   traffic to be sent to one of the gateways first for policy checking,
   which may cause traffic hairpinning at the gateway in a DC. It is
   desirable that an NVE can hold some policies and be able to forward
   inter-subnet traffic directly. To reduce the burden on the NVE, a
   hybrid design may be deployed, i.e. an NVE performs forwarding for
   selected inter-subnet traffic and the designated GW performs the
   rest. For example, each NVE performs inter-subnet forwarding within
   a tenant, and the designated GW is used for inter-subnet traffic
   from/to different tenant networks.

   A tenant network may span multiple data centers over distance. DC
   operators may configure an L2 VN within each DC and an L3 VN between
   DCs for a tenant network. For this configuration, the virtual L2/L3
   gateways can be implemented on the DC GW devices. Figure 2
   illustrates this configuration.

   Figure 2 depicts two DC sites. Site A constructs one L2 VN, say
   L2VNa, on NVE1, NVE2, and NVE5. NVE1 and NVE2 reside on the servers
   which host multiple tenant systems; NVE5 resides on the DC GW
   device. Site Z has a similar configuration, with L2VNz on NVE3,
   NVE4, and NVE6.
   One L3 VN, say L3VNx, is configured on NVE5 at site A and NVE6 at
   site Z. An internal Virtual Integrated Routing/Bridging (VIRB)
   interface is used between the L2VNI and the L3VNI on NVE5 and NVE6,
   respectively. The L2VNI holds the MAC/NVE mapping table, and the
   L3VNI holds the IP prefix/NVE mapping table. A packet arriving at
   NVE5 from L2VNa will be decapsulated, converted into an IP packet,
   and then encapsulated and sent to site Z. Policies can be checked at
   the VIRB.

   Note that L2VNa, L2VNz, and L3VNx in Figure 2 are all overlay
   virtual networks.

   NVE5/DCGW +------------+                  +-----------+ NVE6/DCGW
             |  +-----+   | '''''''''''''''' |   +-----+ |
             |  |L3VNI+----+'     L3VNx    '+----+L3VNI| |
             |  +--+--+   | '''''''''''''''' |   +--+--+ |
             |     |VIRB  |                  |  VIRB|    |
             |  +--+---+  |                  | +---+--+  |
             |  |L2VNIs|  |                  | |L2VNIs|  |
             |  +--+---+  |                  | +---+--+  |
             +----+-------+                  +------+----+
             ''''|''''''''''                ''''''|'''''''
            '    L2VNa      '              '    L2VNz     '
    NVE1/S ''/'''''''''\'' NVE2/S  NVE3/S '''/'''''''\'' NVE4/S
   +-----+---+     +----+----+    +------+--+     +----+----+
   | +--+--+ |     | +--+--+ |    | +---+-+ |     | +--+--+ |
   | |L2VNI| |     | |L2VNI| |    | |L2VNI| |     | |L2VNI| |
   | ++---++ |     | ++---++ |    | ++---++ |     | ++---++ |
   +--+---+--+     +--+---+--+    +--+---+--+     +--+---+--+
      |...|           |...|          |...|           |...|

     Tenant Systems                 Tenant Systems

        DC Site A                      DC Site Z

      Figure 2 Tenant Virtual Network with Bridging/Routing

4.3. Virtualized Data Center (vDC)

   Enterprise DCs today may deploy routers, switches, and network
   appliance devices to construct their internal networks, DMZs, and
   external network access, and they have many servers and storage
   systems running various applications. A DC provider may construct a
   virtualized DC over its DC infrastructure and offer a virtual DC
   service to enterprise customers. A vDC provides the same capability
   as a physical DC. A customer manages which applications run in the
   vDC and how.
   With network virtualization overlay technology, instead of deploying
   many hardware devices, DC operators may build such vDCs on top of a
   common DC infrastructure for many customers and offer network
   service functions to each vDC. The network service functions may
   include firewall, DNS, load balancer, gateway, etc. The network
   virtualization overlay further enables the potential for vDC
   mobility when a customer moves to a different location, because the
   vDC configuration is decoupled from the infrastructure network.

   Figure 3 below illustrates one scenario. For simplicity of
   illustration, it only shows the L3 VN and L2 VNs as virtual routers
   or switches. In this case, DC operators create several L2 VNs
   (L2VNx, L2VNy, L2VNz in Figure 3) to group the tenant systems
   together on a per-application basis, and create one L3 VN, e.g.
   L3VNa, for the internal routing. A network device (which may be a VM
   or a server) runs firewall/gateway applications and connects to
   L3VNa and the Internet. A load balancer (LB) is used in L2VNx. A
   VPWS p2p tunnel is also built between the gateway and the enterprise
   router. The enterprise customer runs Web/Mail/Voice applications at
   the provider DC site; it lets the users at the enterprise site
   access the applications via the VPN tunnel and via the Internet
   through a gateway at the enterprise site, and it lets Internet users
   access the applications via the gateway in the provider DC.

   The customer decides which applications are accessed by the intranet
   only and which by both the intranet and the extranet, and configures
   the proper security policy and gateway functions. Furthermore, a
   customer may want multiple zones in a vDC for security and/or may
   set different QoS levels for the different applications.
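As an illustration only, the vDC of Figure 3 can be captured as data that a provisioning system might consume. The sketch below uses invented names and structures throughout; nothing in it is an NVO3-defined schema or any vendor's API.

```python
# Hypothetical data model mirroring the vDC of Figure 3: per-
# application L2 VNs joined by one L3 VN behind a firewall/gateway.
vdc = {
    "l3_vns": {"L3VNa": {"members": ["L2VNx", "L2VNy", "L2VNz"]}},
    "l2_vns": {
        "L2VNx": {"apps": "Web",  "services": ["LB"]},  # load balancer here
        "L2VNy": {"apps": "Mail", "services": []},
        "L2VNz": {"apps": "VoIP", "services": []},
    },
    "gateway": {
        "attach": ["L3VNa", "Internet", "VPN"],
        # Customer-configured policy: which applications are reachable
        # from the intranet only, and which also from the extranet.
        "policy": {"Web":  ["intranet", "extranet"],
                   "Mail": ["intranet"],
                   "VoIP": ["intranet"]},
    },
}

def reachable_from_internet(vdc):
    """Return the applications the gateway policy exposes externally."""
    policy = vdc["gateway"]["policy"]
    return sorted(app for app, nets in policy.items() if "extranet" in nets)
```

Under this kind of model, changing an application from intranet-only to extranet access becomes a policy edit rather than a hardware change, which is the point of the vDC use case.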
   This use case requires the NVO3 solution to provide the DC operator
   with an easy way to create VNs and NVEs for any design, to quickly
   assign TSs to the VNIs on the NVEs they attach to, to easily set up
   the virtual topology and place or configure policies on an NVE or on
   the VMs that run network services, and to support VM mobility.
   Furthermore, a DC operator and/or customer should be able to view
   the vDC topology and access the individual virtual components in the
   vDC. Either the DC provider or the tenant can provision virtual
   components in the vDC. It is desirable to automate the provisioning
   process and to have programmability.

        Internet                        ^ Internet
                                        |
           ^                         +--+---+
           |                         |  GW  |
           |                         +--+---+
           |                            |
   +-------+--------+               +--+---+
   |Firewall/Gateway+----- VPN -----+router|
   +-------+--------+               +-+--+-+
           |                          |  |
        ...+....                      |..|
   +-------: L3 VNa :---------+       LANs
   +-+-+    ........          |
   |LB |       |              |       Enterprise Site
   +-+-+       |              |
    ...+...  ...+...    ...+...
   : L2VNx : : L2VNy : : L2VNz :
    .......   .......   .......
     |..|      |..|       |..|
     |  |      |  |       |  |
    Web Apps  Mail Apps  VoIP Apps

            Provider DC Site

   The firewall/gateway and the load balancer (LB) may run on a server
   or on VMs.

        Figure 3 Virtual Data Center Using NVO3

5. OAM Considerations

   NVO3 brings a DC provider the ability to segregate tenant traffic,
   and the provider needs to manage and maintain the NVO3 instances.
   Similarly, a tenant needs either to be informed about underlying
   network failures impacting its applications, or to be able to detect
   both overlay and underlay network failures itself and build some
   resiliency mechanisms.

   Various OAM and SOAM tools and procedures are defined in [IEEE
   802.1ag], [ITU-T G.8013/Y.1731], [RFC4378], [RFC5880], and [ITU-T
   Y.1564] for L2 and L3 networks, including continuity check,
   loopback, link trace, testing, alarms such as AIS/RDI, and on-demand
   and periodic measurements.
   These procedures may apply to tenant overlay networks and tenants
   not only for proactive maintenance but also to ensure support of
   Service Level Agreements (SLAs).

   As a tunnel traverses different networks, OAM messages need to be
   translated at the edge of each network to ensure end-to-end OAM.

6. Summary

   This document describes some general potential use cases of NVO3 in
   DCs. The combination of these cases should give operators the
   flexibility and capability to design more sophisticated cases for
   various purposes.

   DC services may vary, from infrastructure as a service (IaaS) and
   platform as a service (PaaS) to software as a service (SaaS), in
   which the network virtualization overlay is just a portion of an
   application service. NVO3 decouples the service construction and
   configuration from the DC network infrastructure configuration and
   helps the deployment of higher-level services.

   NVO3's underlying network provides the tunneling between NVEs so
   that two NVEs appear as one hop to each other. Many tunneling
   technologies can serve this function. The tunnels may in turn be
   tunneled over other intermediate tunnels across the Internet or
   other WANs. It is also possible that intra-DC and inter-DC tunnels
   are stitched together to form an end-to-end tunnel between two NVEs.

   A DC virtual network may be accessed by external users in a secure
   way. Many existing technologies can help achieve this.

   NVO3 implementations may vary. Some DC operators prefer to use a
   centralized controller to manage tenant system reachability in a
   tenant network, while others prefer to use distributed protocols to
   advertise the tenant system locations, i.e. the associated NVEs. For
   migration and other special requirements, different solutions may
   apply to one tenant network in a DC.
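Whichever distribution method is used, the state an NVE ultimately consults can be pictured as a mapping from a tenant system address to the NVE behind which that TS currently resides. The sketch below is purely hypothetical (invented identifiers and example addresses), not a structure defined by NVO3; a centralized controller push and a distributed advertisement would both converge on state of roughly this shape.

```python
# Hypothetical (VN identifier, tenant address) -> remote NVE tunnel
# endpoint table consulted on NVE egress lookup.
reachability = {
    ("VNID-1001", "10.0.0.5"): "192.0.2.11",
    ("VNID-1001", "10.0.0.6"): "192.0.2.12",
}

def tunnel_endpoint(vnid, ts_addr):
    """Look up the NVE currently hosting a tenant system.

    A miss would trigger a query or flood, depending on the solution.
    """
    return reachability.get((vnid, ts_addr))

def move_ts(vnid, ts_addr, new_nve):
    """TS mobility is a binding update; it must propagate quickly
    enough that live traffic toward the TS is not interrupted."""
    reachability[(vnid, ts_addr)] = new_nve
```

The speed at which `move_ts`-style updates reach every interested NVE is exactly the reachability-update requirement noted in Section 2.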
   When a tenant network spans multiple DCs and WANs, each network
   administration domain may use different methods to distribute the
   tenant system locations. In that case, both control-plane and data-
   plane interworking are necessary.

7. Security Considerations

   Security is a concern. DC operators need to provide each tenant with
   a secured virtual network, which means that one tenant's traffic is
   isolated from other tenants' traffic as well as from non-tenant
   traffic. They also need to protect the DC underlying network from
   any tenant application attacking it through the tenant virtual
   network, and from one tenant application attacking another tenant
   application via the DC networks. For example, a tenant application
   may attempt to generate a large volume of traffic to overload the DC
   underlying network. An NVO3 solution has to address these issues.

8. IANA Considerations

   This document does not request any action from IANA.

9. Acknowledgements

   The authors would like to thank Sue Hares, Young Lee, David Black,
   Pedro Marques, Mike McBride, David McDysan, Randy Bush, Uma
   Chunduri, and Eric Gray for their reviews, comments, and
   suggestions.

10. References

10.1. Normative References

   [RFC4364] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
             Networks (VPNs)", RFC 4364, February 2006.

   [IEEE 802.1ag] "Virtual Bridged Local Area Networks - Amendment 5:
             Connectivity Fault Management", December 2007.

   [ITU-T G.8013/Y.1731] "OAM Functions and Mechanisms for Ethernet
             based Networks", 2011.

   [ITU-T Y.1564] "Ethernet service activation test methodology", 2011.

   [RFC4378] Allan, D. and T. Nadeau, "A Framework for Multi-Protocol
             Label Switching (MPLS) Operations and Management (OAM)",
             RFC 4378, February 2006.

   [RFC4301] Kent, S. and K. Seo, "Security Architecture for the
             Internet Protocol", RFC 4301, December 2005.

   [RFC5880] Katz, D. and D. Ward, "Bidirectional Forwarding Detection
             (BFD)", RFC 5880, June 2010.
10.2. Informative References

   [NVGRE]   Sridharan, M., et al., "NVGRE: Network Virtualization
             using Generic Routing Encapsulation", draft-sridharan-
             virtualization-nvgre-03, work in progress.

   [NVO3ARCH] Black, D., et al., "An Architecture for Overlay Networks
             (NVO3)", draft-ietf-nvo3-arch-00, work in progress.

   [NVO3PRBM] Narten, T., et al., "Problem Statement: Overlays for
             Network Virtualization", draft-ietf-nvo3-overlay-problem-
             statement-04, work in progress.

   [NVO3FRWK] Lasserre, M., Motin, T., et al., "Framework for DC
             Network Virtualization", draft-ietf-nvo3-framework-04,
             work in progress.

   [NVO3MCAST] Ghanwani, A., "Multicast Issues in Networks Using NVO3",
             draft-ghanwani-nvo3-mcast-issues-00, work in progress.

   [VRF-LITE] Cisco, "Configuring VRF-lite", http://www.cisco.com

   [VXLAN]   Mahalingam, M., Dutt, D., et al., "VXLAN: A Framework for
             Overlaying Virtualized Layer 2 Networks over Layer 3
             Networks", draft-mahalingam-dutt-dcops-vxlan-06, work in
             progress.

Authors' Addresses

   Lucy Yong
   Phone: +1-918-808-1918
   Email: lucy.yong@huawei.com

   Mehmet Toy
   Comcast
   1800 Bishops Gate Blvd.,
   Mount Laurel, NJ 08054
   Phone: +1-856-792-2801
   Email: mehmet_toy@cable.comcast.com

   Aldrin Isaac
   Bloomberg
   Email: aldrin.isaac@gmail.com

   Vishwas Manral
   Hewlett-Packard Corp.
   3000 Hanover Street, Building 20C
   Palo Alto, CA 95014
   Phone: +1-650-857-5501
   Email: vishwas.manral@hp.com

   Linda Dunbar
   Huawei Technologies,
   5340 Legacy Dr.
   Plano, TX 75025 US
   Phone: +1-469-277-5840
   Email: linda.dunbar@huawei.com