2 NVO3 Working Group Yizhou Li 3 INTERNET-DRAFT Lucy Yong 4 Intended Status: Informational Huawei Technologies 5 Lawrence Kreeger 6 Cisco 7 Thomas Narten 8 IBM 9 David Black 10 EMC 11 Expires: May 22, 2015 November 18, 2014 13 Hypervisor to NVE Control Plane Requirements 14 draft-ietf-nvo3-hpvr2nve-cp-req-01 16 Abstract 18 In a Split-NVE architecture, the functions of the NVE are split 19 across the hypervisor/container on a server and an external network 20 device which is called an external NVE.
A control plane 21 protocol (or protocols) between a hypervisor and its associated external NVE(s) 22 is used for the hypervisor to distribute its virtual machine 23 networking state to the external NVE(s) for further handling. This 24 document illustrates the functionality required by this type of 25 control plane signaling protocol and outlines the high level 26 requirements. Virtual machine states as well as state transitioning 27 are summarized to help clarify the needed protocol requirements. 29 Status of this Memo 31 This Internet-Draft is submitted to IETF in full conformance with the 32 provisions of BCP 78 and BCP 79. 34 Internet-Drafts are working documents of the Internet Engineering 35 Task Force (IETF), its areas, and its working groups. Note that 36 other groups may also distribute working documents as 37 Internet-Drafts. 39 Internet-Drafts are draft documents valid for a maximum of six months 40 and may be updated, replaced, or obsoleted by other documents at any 41 time. It is inappropriate to use Internet-Drafts as reference 42 material or to cite them other than as "work in progress." 44 The list of current Internet-Drafts can be accessed at 45 http://www.ietf.org/1id-abstracts.html 46 The list of Internet-Draft Shadow Directories can be accessed at 47 http://www.ietf.org/shadow.html 49 Copyright and License Notice 51 Copyright (c) 2014 IETF Trust and the persons identified as the 52 document authors. All rights reserved. 54 This document is subject to BCP 78 and the IETF Trust's Legal 55 Provisions Relating to IETF Documents 56 (http://trustee.ietf.org/license-info) in effect on the date of 57 publication of this document. Please review these documents 58 carefully, as they describe your rights and restrictions with respect 59 to this document.
Code Components extracted from this document must 60 include Simplified BSD License text as described in Section 4.e of 61 the Trust Legal Provisions and are provided without warranty as 62 described in the Simplified BSD License. 64 Table of Contents 66 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3 67 1.1 Terminology . . . . . . . . . . . . . . . . . . . . . . . . 4 68 1.2 Target Scenarios . . . . . . . . . . . . . . . . . . . . . 5 69 2. VM Lifecycle . . . . . . . . . . . . . . . . . . . . . . . . . 7 70 2.1 VM Creation Event . . . . . . . . . . . . . . . . . . . . . 7 71 2.2 VM Live Migration Event . . . . . . . . . . . . . . . . . . 8 72 2.3 VM Termination Event . . . . . . . . . . . . . . . . . . . . 9 73 2.4 VM Pause, Suspension and Resumption Events . . . . . . . . . 9 74 3. Hypervisor-to-NVE Control Plane Protocol Functionality . . . . 9 75 3.1 VN connect and Disconnect . . . . . . . . . . . . . . . . . 10 76 3.2 TSI Associate and Activate . . . . . . . . . . . . . . . . . 11 77 3.3 TSI Disassociate and Deactivate . . . . . . . . . . . . . . 14 78 4. Hypervisor-to-NVE Control Plane Protocol Requirements . . . . . 15 79 5. Security Considerations . . . . . . . . . . . . . . . . . . . . 16 80 6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . . 17 81 7. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 17 82 8. References . . . . . . . . . . . . . . . . . . . . . . . . . . 17 83 8.1 Normative References . . . . . . . . . . . . . . . . . . . 17 84 8.2 Informative References . . . . . . . . . . . . . . . . . . 17 85 Appendix A. IEEE 802.1Qbg VDP Illustration (For information 86 only) . . . . . . . . . . . . . . . . . . . . . . . . . . 18 87 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 20 89 1. 
Introduction 91 In the Split-NVE architecture shown in Figure 1, the functionality of 92 the NVE is split across an end device supporting virtualization and 93 an external network device which is called an external NVE. The 94 portion of the NVE functionality located on the hypervisor/container 95 is called the tNVE and the portion located on the external NVE is 96 called the nNVE in this document. Overlay encapsulation/decapsulation 97 functions are normally off-loaded to the nNVE on the external NVE. 98 The tNVE is normally implemented as a part of the hypervisor or container 99 in a virtualized end device. 101 The NVO3 problem statement [RFC7364] discusses the need for a control 102 plane protocol (or protocols) to populate each NVE with the state 103 needed to perform the required functions. In one scenario, an NVE 104 provides overlay encapsulation/decapsulation packet forwarding 105 services to Tenant Systems (TSs) that are co-resident within the NVE 106 on the same End Device (e.g. when the NVE is embedded within a 107 hypervisor or a Network Service Appliance). In such cases, there is 108 no need for a standardized protocol between the hypervisor and NVE, 109 as the interaction is implemented via software on a single device. 110 In the Split-NVE architecture scenarios shown in Figure 2 111 to Figure 4, however, a control plane protocol between a hypervisor and its 112 associated external NVE(s) is required for the hypervisor to 113 distribute the virtual machines' networking state to the NVE(s) for 114 further handling. This protocol is in fact an NVE-internal protocol and 115 runs between the tNVE and nNVE logical entities. This protocol is 116 mentioned in the NVO3 problem statement [RFC7364] and appears as the 117 third work item. 119 Virtual machine states and state transitioning are summarized in this 120 document to show events where the NVE needs to take specific actions.
121 Such events might correspond to actions the control plane signaling 122 protocols between the hypervisor and external NVE will need to take. 123 Then the high level requirements to be fulfilled are outlined. 125 +-- -- -- -- Split-NVE -- -- -- --+ 126 | 127 | 128 +---------------|-----+ 129 | +------------- ----+| | 130 | | +--+ +---\|/--+|| +------ --------------+ 131 | | |VM|---+ ||| | \|/ | 132 | | +--+ | ||| |+--------+ | 133 | | +--+ | tNVE |||----- - - - - - -----|| | | 134 | | |VM|---+ ||| || nNVE | | 135 | | +--+ +--------+|| || | | 136 | | || |+--------+ | 137 | +--Hpvr/Container--+| +---------------------+ 138 +---------------------+ 140 End Device External NVE 142 Figure 1 Split-NVE structure 144 This document uses the term "hypervisor" throughout when describing 145 the Split-NVE scenario where part of the NVE functionality is off- 146 loaded to a separate device from the "hypervisor" that contains a VM 147 connected to a VN. In this context, the term "hypervisor" is meant to 148 cover any device type where part of the NVE functionality is off- 149 loaded in this fashion, e.g., a Network Service Appliance or a Linux 150 Container. 152 This document often uses the terms "VM" and "Tenant System" (TS) 153 interchangeably, even though a VM is just one type of Tenant System 154 that may connect to a VN. For example, a service instance within a 155 Network Service Appliance may be another type of TS, or a system 156 running on an OS-level virtualization technology like Linux 157 Containers. When this document uses the term VM, it will in most 158 cases apply to other types of TSs. 160 Section 2 describes VM states and state transitioning in its 161 lifecycle. Section 3 introduces Hypervisor-to-NVE control plane 162 protocol functionality derived from VM operations and network events. 163 Section 4 outlines the requirements of the control plane protocol to 164 achieve the required functionality.
166 1.1 Terminology 168 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 169 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 170 document are to be interpreted as described in RFC 2119 [RFC2119]. 172 This document uses the same terminology as found in [RFC7365] and [I- 173 D.ietf-nvo3-nve-nva-cp-req]. This section defines additional 174 terminology used by this document. 176 Split-NVE: a type of NVE whose functionality is split 177 across an end device supporting virtualization and an external 178 network device. 180 tNVE: the portion of Split-NVE functionality located on the end 181 device supporting virtualization. 183 nNVE: the portion of Split-NVE functionality located on the network 184 device which directly or indirectly connects to the end device 185 holding the corresponding tNVE. 187 External NVE: the physical network device holding the nNVE. 189 Hypervisor/Container: the logical collection of software, firmware 190 and/or hardware that allows the creation and running of server or 191 service appliance virtualization. The tNVE is located on the 192 Hypervisor/Container. The term is loosely used in this document to refer to 193 the end device supporting the virtualization. For simplicity, we also 194 use Hypervisor in this document to represent both hypervisor and 195 container. 197 VN Profile: Metadata associated with a VN that is applied to any 198 attachment point to the VN. That is, VAP properties that are applied 199 to all VAPs associated with a given VN and used by an NVE when 200 ingressing/egressing packets to/from a specific VN. Metadata could 201 include such information as ACLs, QoS settings, etc. The VN Profile 202 contains parameters that apply to the VN as a whole. Control 203 protocols between the NVE and NVA could use the VN ID or VN Name to 204 obtain the VN Profile. 206 VSI: Virtual Station Interface.
[IEEE 802.1Qbg] 208 VDP: VSI Discovery and Configuration Protocol [IEEE 802.1Qbg] 210 1.2 Target Scenarios 212 In the Split-NVE architecture, an external NVE can provide an offload 213 of the encapsulation / decapsulation function, network policy 214 enforcement, as well as the VN Overlay protocol overhead. This 215 offloading may provide performance improvements and/or resource 216 savings to the End Device (e.g. hypervisor) making use of the 217 external NVE. 219 The following figures give example scenarios of a Split-NVE 220 architecture. 222 Hypervisor Access Switch 223 +------------------+ +-----+-------+ 224 | +--+ +-------+ | | | | 225 | |VM|---| | | VLAN | | | 226 | +--+ | tNVE |---------+ nNVE| +--- Underlying 227 | +--+ | | | Trunk | | | Network 228 | |VM|---| | | | | | 229 | +--+ +-------+ | | | | 230 +------------------+ +-----+-------+ 231 Figure 2 Hypervisor with an External NVE 233 Hypervisor L2 Switch 234 +---------------+ +-----+ +----+---+ 235 | +--+ +----+ | | | | | | 236 | |VM|---| | |VLAN | |VLAN | | | 237 | +--+ |tNVE|-------+ +-----+nNVE| +--- Underlying 238 | +--+ | | |Trunk| |Trunk| | | Network 239 | |VM|---| | | | | | | | 240 | +--+ +----+ | | | | | | 241 +---------------+ +-----+ +----+---+ 242 Figure 3 Hypervisor with an External NVE 243 across an Ethernet Access Switch 245 Network Service Appliance Access Switch 246 +--------------------------+ +-----+-------+ 247 | +------------+ | \ | | | | 248 | |Net Service |----| \ | | | | 249 | |Instance | | \ | VLAN | | | 250 | +------------+ |tNVE| |------+nNVE | +--- Underlying 251 | +------------+ | | | Trunk| | | Network 252 | |Net Service |----| / | | | | 253 | |Instance | | / | | | | 254 | +------------+ | / | | | | 255 +--------------------------+ +-----+-------+ 256 Figure 4 Physical Network Service Appliance with an External NVE 258 Tenant Systems connect to external NVEs via a Tenant System Interface 259 (TSI). 
The TSI logically connects to the external NVE via a Virtual 259 Access Point (VAP) [I-D.ietf-nvo3-arch]. The external NVE may provide 260 Layer 2 or Layer 3 forwarding. In the Split-NVE architecture, the 261 external NVE may be able to reach multiple MAC and IP addresses via a 262 TSI. For example, Tenant Systems that are providing network services 263 (such as transparent firewall, load balancer, or VPN gateway) are likely 264 to have a complex address hierarchy. This implies that if a given TSI 265 disassociates from one VN, all the MAC and/or IP addresses are also 266 disassociated. There is no need to signal the deletion of every MAC 267 or IP address when the TSI is brought down or deleted. In the majority of 268 cases, a VM will be acting as a simple host that will have a single 269 TSI and a single MAC and IP address visible to the external NVE. 272 2. VM Lifecycle 274 Figure 2 of [I-D.ietf-opsawg-vmm-mib] shows the state transitions of a 275 VM. Some of the VM states are of interest to the external NVE. This 276 section illustrates the relevant phases and events in the VM 277 lifecycle. It should be noted that the following subsections do not 278 give an exhaustive traversal of VM lifecycle states. They are intended 279 as illustrative examples relevant to the Split-NVE 280 architecture, not as prescriptive text; the goal is to capture 281 sufficient detail to set a context for the signaling protocol 282 functionality and requirements described in the following sections. 284 2.1 VM Creation Event 286 The VM creation event causes the VM state to transition from Preparing to 287 Shutdown and then to Running [I-D.ietf-opsawg-vmm-mib]. The end 288 device allocates and initializes local virtual resources like storage 289 in the VM Preparing state. In Shutdown state, the VM has everything 290 ready except that CPU execution is not scheduled by the hypervisor 291 and the VM's memory is not resident in the hypervisor.
From the Shutdown 292 state to the Running state, a human action or a system-triggered event is 293 normally required. Running state indicates the VM is in the 294 normal execution state. As part of transitioning the VM to the 295 Running state, the hypervisor must also provision network 296 connectivity for the VM's TSI(s) so that Ethernet frames can be sent 297 and received correctly. No ongoing migration, suspension or shutdown 298 is in process. 300 In the VM creation phase, the VM's TSI has to be associated with the 301 external NVE. Association here indicates that the hypervisor and the 302 external NVE have signaled each other and reached some agreement, and that 303 relevant networking parameters or information have been provisioned 304 properly. The external NVE should be informed of the VM's TSI MAC 305 address and/or IP address. In addition to external network 306 connectivity, the hypervisor may provide local network connectivity 307 between the VM's TSI and other VMs' TSIs that are co-resident on the 308 same hypervisor. When the intra- or inter-hypervisor connectivity is 309 extended to the external NVE, a locally significant tag, e.g. a VLAN 310 ID, should be used between the hypervisor and the external NVE to 311 differentiate each VN's traffic. Both the hypervisor and external NVE 312 sides must agree on that tag value for traffic identification, 313 isolation and forwarding. 315 The external NVE may need to do some preparation work before it 316 signals successful association with the TSI. Such preparation work may 317 include locally saving the state and binding information of the 318 tenant system interface and its VN, communicating with the NVA for 319 network provisioning, etc. 321 Tenant System interface association should be performed before the VM 322 enters the Running state, preferably in Shutdown state. If the association 323 with the external NVE fails, the VM should not go into the Running state.
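As a rough illustration only (not part of any protocol defined here; all class and method names are hypothetical), the creation-time ordering described above, in which every TSI must be associated with the external NVE before the VM may enter the Running state, can be sketched as:

```python
from enum import Enum, auto

class VmState(Enum):
    PREPARING = auto()
    SHUTDOWN = auto()
    RUNNING = auto()

class AssociationError(Exception):
    """The external NVE rejected or failed a TSI association."""

class Tsi:
    """Tenant System Interface: the identity the NVE must learn."""
    def __init__(self, tsi_id, mac, vn_id):
        self.id, self.mac, self.vn_id = tsi_id, mac, vn_id

class ExternalNve:
    """Toy external NVE: records TSI/VN bindings and hands out a
    locally significant tag (e.g. a VLAN ID) per association."""
    def __init__(self):
        self.associations = {}   # tsi_id -> (mac, vn_id, local_tag)
        self._next_tag = 100     # arbitrary starting tag value

    def associate(self, tsi):
        # Preparation work: save the TSI/VN binding, (in reality)
        # talk to the NVA, then agree on a local tag.
        tag = self._next_tag
        self._next_tag += 1
        self.associations[tsi.id] = (tsi.mac, tsi.vn_id, tag)
        return True

class Vm:
    def __init__(self, tsis):
        self.state = None
        self.tsis = tsis

def create_vm(vm, nve):
    # Preparing: allocate and initialize local virtual resources.
    vm.state = VmState.PREPARING
    # Shutdown: everything ready except CPU scheduling and memory.
    vm.state = VmState.SHUTDOWN
    # Associate every TSI, preferably while still in Shutdown state;
    # on failure the VM must not proceed to Running.
    for tsi in vm.tsis:
        if not nve.associate(tsi):
            raise AssociationError(tsi.id)
    vm.state = VmState.RUNNING
```

The key point the sketch captures is the ordering constraint: the transition to Running is gated on successful association of all TSIs.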
325 2.2 VM Live Migration Event 327 Live migration is sometimes referred to as "hot" migration, in that, 328 from an external viewpoint, the VM appears to continue to run while 329 being migrated to another server (e.g., TCP connections generally 330 survive this class of migration). In contrast, "cold" migration 331 consists of shutting down VM execution on one server and restarting it on 332 another. For simplicity, the following abstract summary of live 333 migration assumes shared storage, so that the VM's storage is 334 accessible to the source and destination servers. Assume a VM live 335 migrates from hypervisor 1 to hypervisor 2. Such a migration event 336 involves state transitions on both hypervisors, source hypervisor 337 1 and destination hypervisor 2. The VM state on source hypervisor 1 338 transitions from Running to Migrating and then to Shutdown [I-D.ietf- 339 opsawg-vmm-mib]. The VM state on destination hypervisor 2 transitions from 340 Shutdown to Migrating and then to Running. 342 The external NVE connected to destination hypervisor 2 has to 343 associate the migrating VM's TSI with it by discovering the TSI's MAC 344 and/or IP addresses, its VN, the locally significant VID if any, and 345 provisioning other network related parameters of the TSI. The 346 external NVE may be informed about the VM's peer VMs, storage devices 347 and other network appliances with which the VM needs to communicate 348 or is communicating. The migrated VM on destination hypervisor 2 349 SHOULD NOT go to Running state before all the network provisioning 350 and binding has been done. 352 The migrating VM SHOULD NOT be in Running state at the same time on 353 the source hypervisor and destination hypervisor during migration. 354 The VM on the source hypervisor does not transition into Shutdown 355 state until the VM successfully enters the Running state on the 356 destination hypervisor.
It is possible that the VM on the source 357 hypervisor stays in Migrating state for a while after the VM on the 358 destination hypervisor is in Running state. 360 2.3 VM Termination Event 362 A VM termination event is also referred to as "powering off" a VM. A VM 363 termination event leads to its state going to Shutdown. There are two 364 possible causes to terminate a VM [I-D.ietf-opsawg-vmm-mib]: one is 365 the normal "power off" of a running VM; the other is that the VM has been 366 migrated to another hypervisor and the VM image on the source 367 hypervisor has to stop executing and be shut down. 369 In VM termination, the external NVE connecting to that VM needs to 370 deprovision the VM, i.e. delete the network parameters associated 371 with that VM. In other words, the external NVE has to disassociate 372 the VM's TSI. 374 2.4 VM Pause, Suspension and Resumption Events 376 The VM pause event leads to the VM transitioning from Running state to 377 Paused state. The Paused state indicates that the VM is resident in 378 memory but no longer scheduled to execute by the hypervisor [I- 379 D.ietf-opsawg-vmm-mib]. The VM can be easily re-activated from Paused 380 state to Running state. 382 The VM suspension event leads to the VM transitioning from Running state 383 to Suspended state. The VM resumption event leads to the VM 384 transitioning from Suspended state to Running state. Suspended 385 state means the memory and CPU execution state of the virtual machine 386 are saved to persistent storage. During this state, the virtual 387 machine is not scheduled to execute by the hypervisor [I-D.ietf- 388 opsawg-vmm-mib]. 390 In the Split-NVE architecture, the external NVE should keep any 391 paused or suspended VM in association as the VM can return to Running 392 state at any time. 394 3.
Hypervisor-to-NVE Control Plane Protocol Functionality 396 The following subsections show illustrative examples of the state 397 transitions on an external NVE which are relevant to Hypervisor-to-NVE 398 signaling protocol functionality. It should be noted they are not 399 prescriptive text for full state machines. 401 3.1 VN connect and Disconnect 403 In the Split-NVE scenario, a protocol is needed between the End 404 Device (e.g. Hypervisor) making use of the external NVE and the 405 external NVE in order to make the external NVE aware of the changing 406 VN membership requirements of the Tenant Systems within the End 407 Device. 409 A key driver for using a protocol rather than static 410 configuration of the external NVE is that the VN connectivity 411 requirements can change frequently as VMs are brought up, moved and 412 brought down on various hypervisors throughout the data center or 413 external cloud. 415 +---------------+ Recv VN_connect; +-------------------+ 416 |VN_Disconnected| return Local_Tag value |VN_Connected | 417 +---------------+ for VN if successful; +-------------------+ 418 |VN_ID; |-------------------------->|VN_ID; | 419 |VN_State= | |VN_State=connected;| 420 |disconnected; | |Num_TSI_Associated;| 421 | |<----Recv VN_disconnect----|Local_Tag; | 422 +---------------+ |VN_Context; | 423 +-------------------+ 425 Figure 5 State Transition Example of a VAP Instance 426 on an External NVE 428 Figure 5 shows the state transition for a VAP on the external NVE. An 429 NVE that supports the hypervisor-to-NVE control plane protocol should 430 support one instance of the state machine for each active VN. The 431 state transition on the external NVE is normally triggered by 432 hypervisor-facing events and behaviors. Some of the interleaved 433 interactions between the NVE and NVA are illustrated for a better 434 understanding of the whole procedure, while others are not 435 shown.
More detailed information regarding that is available in [I- 436 D.ietf-nvo3-nve-nva-cp-req]. 438 The external NVE must be notified when an End Device requires 439 connection to a particular VN and when it no longer requires 440 connection. In addition, the external NVE must provide a local tag 441 value for each connected VN to the End Device to use for exchange of 442 packets between the End Device and the external NVE (e.g. a locally 443 significant 802.1Q tag value). How "local" the significance is 444 depends on whether the Hypervisor has a direct physical connection to 445 the external NVE (in which case the significance is local to the 446 physical link), or whether there is an Ethernet switch (e.g. a blade 447 switch) connecting the Hypervisor to the NVE (in which case the 448 significance is local to the intervening switch and all the links 449 connected to it). 451 These VLAN tags are used to differentiate between different VNs as 452 packets cross the shared access network to the external NVE. When the 453 external NVE receives packets, it uses the VLAN tag to identify the 454 VN of packets coming from a given TSI, strips the tag, adds the 455 appropriate overlay encapsulation for that VN and sends the packet towards 456 the corresponding remote NVE across the underlying IP network. 458 The identification of the VN in this protocol could be through either 459 a VN Name or a VN ID. A globally unique VN Name facilitates 460 portability of a Tenant's Virtual Data Center. Once an external NVE 461 receives a VN connect indication, the NVE needs a way to get a VN 462 Context allocated (or receive the already allocated VN Context) for a 463 given VN Name or ID (as well as any other information needed to 464 transmit encapsulated packets). How this is done is the subject of 465 the NVE-to-NVA protocol, which is part of work items 1 and 2 in 466 [RFC7364]. 468 The VN_connect message can be explicit or implicit.
Explicit means the 469 hypervisor sends a message explicitly requesting the connection 470 to a VN. Implicit means the external NVE receives other messages, 471 e.g. the very first TSI associate message (see the next subsection) for a 472 given VN, that implicitly indicate its interest in connecting to a VN. 474 A VN_disconnect message indicates that the NVE can release all 475 the resources for that disconnected VN and transition to the VN_disconnected 476 state. The local tag assigned for that VN can then possibly be reclaimed 477 for use by another VN. 479 3.2 TSI Associate and Activate 481 Typically, a TSI is assigned a single MAC address and all frames 482 transmitted and received on that TSI use that single MAC address. As 483 mentioned earlier, it is also possible for a Tenant System to 484 exchange frames using multiple MAC addresses or packets with multiple 485 IP addresses. 487 Particularly in the case of a TS that is forwarding frames or packets 488 from other TSs, the external NVE will need to communicate to the NVA the 489 mapping between the NVE's IP address (on the underlying network) and ALL the 490 addresses the TS is forwarding on behalf of for the corresponding VN. 493 The NVE has two ways in which it can discover the tenant addresses 494 for which frames must be forwarded to a given End Device (and 495 ultimately to the TS within that End Device). 497 1. It can glean the addresses by inspecting the source addresses in 498 packets it receives from the End Device. 500 2. The hypervisor can explicitly signal the address associations of 501 a TSI to the external NVE. The address association includes all the 502 MAC and/or IP addresses possibly used as source addresses in a packet 503 sent from the hypervisor to the external NVE. The external NVE may 504 further use this information to filter future traffic from the 505 hypervisor.
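The second (explicit signaling) approach can be sketched minimally as follows. This is purely illustrative: the document defines no message encoding, so the message layout, names and filtering behavior below are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class TsiAssociateMsg:
    """Hypothetical associate message: all source addresses a TSI
    may use within a given VN (no standard encoding is implied)."""
    tsi_id: str
    vn_id: str
    macs: list = field(default_factory=list)
    ips: list = field(default_factory=list)

class NveFilter:
    """Toy NVE-side filter built from explicitly signaled address
    associations; source addresses never signaled are not permitted."""
    def __init__(self):
        self.allowed = {}  # vn_id -> set of permitted source addresses

    def on_associate(self, msg):
        # Record every MAC and IP the hypervisor signaled for this VN.
        addrs = self.allowed.setdefault(msg.vn_id, set())
        addrs.update(msg.macs)
        addrs.update(msg.ips)

    def permit(self, vn_id, src_addr):
        # Filtering decision for traffic arriving from the hypervisor.
        return src_addr in self.allowed.get(vn_id, set())
```

In contrast, the first (gleaning) approach would populate `allowed` from observed source addresses rather than from signaled associations, trading signaling overhead for slower and less authoritative learning.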
507 To perform the second approach above, the "hypervisor-to-NVE" 508 protocol requires a means to allow End Devices to communicate new 509 tenant address associations for a given TSI within a given VN. 511 Figure 6 shows an example of the state transitions for a TSI connecting 512 to a VAP on the external NVE. An NVE that supports the hypervisor-to- 513 NVE control plane protocol may support one instance of the state 514 machine for each TSI connecting to a given VN. 516 disassociate; +--------+ disassociate 517 +--------------->| Init |<--------------------+ 518 | +--------+ | 519 | | | | 520 | | | | 521 | +--------+ | 522 | | | | 523 | associate | | activate | 524 | +-----------+ +-----------+ | 525 | | | | 526 | | | | 527 | \|/ \|/ | 528 +--------------------+ +---------------------+ 529 | Associated | | Activated | 530 +--------------------+ +---------------------+ 531 |TSI_ID; | |TSI_ID; | 532 |Port; |-----activate---->|Port; | 533 |VN_ID; | |VN_ID; | 534 |State=associated; | |State=activated ; |-+ 535 +-|Num_Of_Addr; |<---deactivate;---|Num_Of_Addr; | | 536 | |List_Of_Addr; | |List_Of_Addr; | | 537 | +--------------------+ +---------------------+ | 538 | /|\ /|\ | 539 | | | | 540 +---------------------+ +-------------------+ 541 add/remove/updt addr; add/remove/updt addr; 542 or update port; or update port; 544 Figure 6 State Transition Example of a TSI Instance 545 on an External NVE 547 The Associated state of a TSI instance on an external NVE indicates that all 548 the addresses for that TSI have already been associated with the VAP of 549 the external NVE on port p for a given VN, but no real traffic to and 550 from the TSI is expected or allowed to pass through. The NVE has 551 reserved all the necessary resources for that TSI. An external NVE 552 may report the mappings of its underlay IP address and the 553 associated TSI addresses to the NVA, and relevant network nodes may save 554 such information in their mapping tables but not their forwarding tables.
An NVE 555 may create ACL or filter rules based on the associated TSI addresses 556 on the attached port p but not enable them yet. The local tag for the VN 557 corresponding to the TSI instance should be provisioned on port p to 558 receive packets. 560 A VM migration event (discussed in Section 2) may cause the hypervisor to 561 send an associate message to the NVE connected to the destination 562 hypervisor the VM migrates to. A VM creation event may also lead to the 563 same practice. 565 The Activated state of a TSI instance on an external NVE indicates 566 that all the addresses for that TSI are functioning correctly on port p 567 and traffic can be received from and sent to that TSI via the NVE. 568 The mappings of the NVE's underlay IP address and the associated TSI 569 addresses should be put into the forwarding table rather than the 570 mapping table on relevant network nodes. ACL or filter rules based on 571 the associated TSI addresses on the attached port p in the NVE are 572 enabled. The local tag for the VN corresponding to the TSI instance MUST 573 be provisioned on port p to receive packets. 575 The Activate message makes the state transition from Init or Associated 576 to Activated. VM creation, VM migration and VM resumption events 577 discussed in Section 2 may trigger the Activate message to be sent 578 from the hypervisor to the external NVE. 580 TSI information may get updated in either the Associated or Activated 581 state. The following are considered updates to the TSI information: 582 adding or removing the associated addresses, updating the current associated 583 addresses (for example, updating the IP address for a given MAC), and updating NVE port 584 information based on where the NVE receives messages. Such updates do 585 not change the state of the TSI. When any address associated to a given 586 TSI changes, the NVE should inform the NVA to update the mapping 587 information between the NVE's underlay address and the associated TSI 588 addresses.
The NVE should also change its local ACL or filter settings
accordingly for the relevant addresses. A port information update
will cause the local tag for the VN corresponding to the TSI instance
to be provisioned on the new port and removed from the old port.

3.3 TSI Disassociate and Deactivate

Disassociate and deactivate are conceptually the reverse behaviors of
associate and activate. When moving from the Activated state to the
Associated state, the external NVE needs to make sure that the
resources are still reserved, but that the addresses associated with
the TSI are no longer functioning and that no traffic to or from the
TSI is expected or allowed to pass through. For example, the NVE
needs to inform the NVA to remove the relevant address mapping
information from the forwarding or routing tables. ACL or filtering
rules regarding the relevant addresses should be disabled. When
moving from the Associated or Activated state to the Init state, the
NVE releases all the resources relevant to the TSI instance. The NVE
should also inform the NVA to remove the relevant entries from the
mapping table. ACL or filtering rules regarding the relevant
addresses should be removed. Local tag provisioning on the connecting
port of the NVE should be cleared.

A VM suspension event (discussed in Section 2) may cause the relevant
TSI instance(s) on the NVE to transition from the Activated state to
the Associated state. A VM pause event normally does not affect the
state of the relevant TSI instance(s) on the NVE, as the VM is
expected to run again soon. A VM shutdown event will normally cause
the relevant TSI instance(s) on the NVE to transition from the
Activated state to the Init state. All resources should be released.

A VM migration causes the TSI instance on the source NVE to leave the
Activated state. When a VM migrates to another hypervisor connected
to the same NVE, i.e.
the source and destination NVEs are the same, the NVE should use the
TSI_ID and the incoming port to differentiate the two TSI instances.

Although the triggering messages for the state transitions shown in
Figure 6 do not indicate the difference between a VM
creation/shutdown event and a VM migration arrival/departure event,
the external NVE can make optimizations if it is notified of such
information. For example, if the NVE knows that an incoming activate
message is caused by migration rather than by VM creation, mechanisms
may be employed or triggered to make sure that the dynamic
configuration or provisioning on the destination NVE is the same as
that on the source NVE for the migrated VM. For example, an IGMP
query [RFC2236] can be triggered by the destination external NVE
toward the migrated VM on the destination hypervisor so that the VM
is forced to answer with an IGMP report to the multicast router. The
multicast router can then correctly send the multicast traffic to the
new external NVE for those multicast groups that the VM had joined
before the migration.

4. Hypervisor-to-NVE Control Plane Protocol Requirements

Req-1: The protocol MUST support a bridged network connecting End
Devices to the External NVE.

Req-2: The protocol MUST support multiple End Devices sharing the
same External NVE via the same physical port across a bridged
network.

Req-3: The protocol MAY support an End Device using multiple external
NVEs simultaneously, but only one external NVE for each VN.

Req-4: The protocol MAY support an End Device using multiple external
NVEs simultaneously for the same VN.

Req-5: The protocol MUST allow an End Device to initiate a request to
its associated External NVE to be connected to or disconnected from a
given VN.

Req-6: The protocol MUST allow an External NVE to initiate a request
to its connected End Devices to be disconnected from a given VN.
Req-7: When a TS attaches to a VN, the protocol MUST allow the End
Device and its external NVE to negotiate a locally-significant tag
for carrying traffic associated with a specific VN (e.g., 802.1Q
tags).

Req-8: The protocol MUST allow an End Device to initiate a request to
associate/disassociate and/or activate/deactivate address(es) of a
TSI instance to a VN on an NVE port.

Req-9: The protocol MUST allow the External NVE to initiate a request
to disassociate and/or deactivate address(es) of a TSI instance from
a VN on an NVE port.

Req-10: The protocol MUST allow an End Device to initiate a request
to add, remove or update address(es) associated with a TSI instance
on the external NVE. Addresses can be expressed in different formats,
for example, as a MAC address, an IP address, or a pair of IP and MAC
addresses.

Req-11: The protocol MUST allow the External NVE to authenticate the
connected End Device.

Req-12: The protocol MUST be able to run over L2 links between the
End Device and its External NVE.

Req-13: The protocol SHOULD support the End Device indicating whether
an associate or activate request results from a VM hot migration
event.

VDP [IEEE 802.1Qbg] is a candidate protocol running at layer 2.
Appendix A illustrates VDP for the reader's information. VDP requires
extensions to fulfill the requirements in this document.

5. Security Considerations

NVEs must ensure that only properly authorized Tenant Systems are
allowed to join and become a part of any specific Virtual Network. In
addition, NVEs will need appropriate mechanisms to ensure that any
hypervisor wishing to use the services of an NVE is properly
authorized to do so.
One design point is whether the hypervisor should supply the NVE with
the necessary information (e.g., VM addresses, VN information, or
other parameters) that the NVE uses directly, or whether the
hypervisor should supply only a VN ID and an identifier for the
associated VM (e.g., its MAC address), with the NVE using that
information to obtain the information needed to validate the
hypervisor-provided parameters or to obtain related parameters in a
secure manner.

6. IANA Considerations

No IANA action is required. RFC Editor: please delete this section
before publication.

7. Acknowledgements

This document was initiated and merged from the drafts
draft-kreeger-nvo3-hypervisor-nve-cp, draft-gu-nvo3-tes-nve-mechanism
and draft-kompella-nvo3-server2nve. Thanks to all the co-authors and
contributing members of those drafts.

The authors would like to specially thank Jon Hudson for his generous
help in improving the readability of this document.

8. References

8.1 Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
          Requirement Levels", BCP 14, RFC 2119, March 1997.

8.2 Informative References

[RFC2236] Fenner, W., "Internet Group Management Protocol, Version
          2", RFC 2236, November 1997.

[RFC7364] Narten, T., Gray, E., Black, D., Fang, L., Kreeger, L., and
          M. Napierala, "Problem Statement: Overlays for Network
          Virtualization", RFC 7364, October 2014.

[RFC7365] Lasserre, M., Balus, F., Morin, T., Bitar, N., and Y.
          Rekhter, "Framework for Data Center (DC) Network
          Virtualization", RFC 7365, October 2014.

[I-D.ietf-nvo3-nve-nva-cp-req] Kreeger, L., Dutt, D., Narten, T., and
          D. Black, "Network Virtualization NVE to NVA Control
          Protocol Requirements", draft-ietf-nvo3-nve-nva-cp-req-01
          (work in progress), October 2013.

[I-D.ietf-nvo3-arch] Black, D., Narten, T., et al., "An Architecture
          for Overlay Networks (NVO3)", draft-ietf-nvo3-arch (work in
          progress).
[I-D.ietf-opsawg-vmm-mib] Asai, H., MacFaden, M., Schoenwaelder, J.,
          Shima, K., and T. Tsou, "Management Information Base for
          Virtual Machines Controlled by a Hypervisor",
          draft-ietf-opsawg-vmm-mib-00 (work in progress), February
          2014.

[IEEE 802.1Qbg] IEEE, "Media Access Control (MAC) Bridges and Virtual
          Bridged Local Area Networks - Amendment 21: Edge Virtual
          Bridging", IEEE Std 802.1Qbg, 2012.

[8021Q]   IEEE, "Media Access Control (MAC) Bridges and Virtual
          Bridged Local Area Networks", IEEE Std 802.1Q-2011, August
          2011.

Appendix A. IEEE 802.1Qbg VDP Illustration (For information only)

VDP has the format shown in Figure A.1. A Virtual Station Interface
(VSI) is an interface to a virtual station that is attached to a
downlink port of an internal bridging function in a server. A VSI's
VDP packets are handled by an external bridge. VDP is the controlling
protocol running between the hypervisor and the external bridge.

 +--------+--------+------+----+----+------+------+------+-----------+
 |TLV type|TLV info|Status|VSI |VSI |VSIID | VSIID|Filter|Filter Info|
 |   7b   |str len |      |Type|Type|Format|      | Info |           |
 |        |   9b   | 1oct | ID |Ver |      |      |format|           |
 |        |        |      |3oct|1oct| 1oct |16oct | 1oct |   M oct   |
 +--------+--------+------+----+----+------+------+------+-----------+
                   |      |                       |      |           |
                   |      |<--VSI type&instance-->|<----Filter------>|
                   |      |<------------VSI attributes-------------->|
 |<--TLV header--->|<-------TLV info string = 23 + M octets--------->|

                   Figure A.1: VDP TLV definitions

There are four basic TLV types.

1. Pre-Associate: Pre-Associate is used to pre-associate a VSI
instance with a bridge port. The bridge validates the request and
returns a failure Status in case of errors. Successful
pre-association does not imply that the indicated VSI Type or
provisioning will be applied to any traffic flowing through the VSI.
The pre-associate enables a faster response to an associate by
allowing the bridge to obtain the VSI Type prior to an association.

2. Pre-Associate with resource reservation: Pre-Associate with
Resource Reservation involves the same steps as Pre-Associate, but on
successful pre-association it also reserves resources in the Bridge
to prepare for a subsequent Associate request.

3. Associate: The Associate creates and activates an association
between a VSI instance and a bridge port. The Bridge allocates any
required bridge resources for the referenced VSI. The Bridge
activates the configuration for the VSI Type ID. This association is
then applied to the traffic flow to/from the VSI instance.

4. De-Associate: The De-Associate is used to remove an association
between a VSI instance and a bridge port. Pre-Associated and
Associated VSIs can be de-associated. De-Associate releases any
resources that were reserved as a result of prior Associate or
Pre-Associate operations for that VSI instance.

De-Associate can be initiated by either side; the other three message
types can only be initiated by the server side.

Some important flag values in the VDP Status field are:

1. M-bit (Bit 5): Indicates that the user of the VSI (e.g., the VM)
is migrating (M-bit = 1) or provides no guidance on the migration of
the user of the VSI (M-bit = 0). The M-bit is used as an indicator
relative to the VSI that the user is migrating to.

2. S-bit (Bit 6): Indicates that the VSI user (e.g., the VM) is
suspended (S-bit = 1) or provides no guidance as to whether the user
of the VSI is suspended (S-bit = 0). A keep-alive Associate request
with S-bit = 1 can be sent when the VSI user is suspended. The S-bit
is used as an indicator relative to the VSI that the user is
migrating from.

The filter information format currently supports the following four
types.

1.
VID Filter Info format

 +---------+------+-------+--------+
 |   #of   |  PS  |  PCP  |  VID   |
 | entries |(1bit)|(3bits)|(12bits)|
 |(2octets)|      |       |        |
 +---------+------+-------+--------+
           |<--Repeated per entry->|

          Figure A.2 VID Filter Info format

2. MAC/VID filter format

 +---------+--------------+------+-------+--------+
 |   #of   | MAC address  |  PS  |  PCP  |  VID   |
 | entries |  (6 octets)  |(1bit)|(3bits)|(12bits)|
 |(2octets)|              |      |       |        |
 +---------+--------------+------+-------+--------+
           |<--------Repeated per entry---------->|

          Figure A.3 MAC/VID filter format

3. GroupID/VID filter format

 +---------+--------------+------+-------+--------+
 |   #of   |   GroupID    |  PS  |  PCP  |  VID   |
 | entries |  (4 octets)  |(1bit)|(3bits)|(12bits)|
 |(2octets)|              |      |       |        |
 +---------+--------------+------+-------+--------+
           |<--------Repeated per entry---------->|

          Figure A.4 GroupID/VID filter format

4. GroupID/MAC/VID filter format

 +---------+----------+-------------+------+-----+--------+
 |   #of   | GroupID  | MAC address |  PS  | PCP |  VID   |
 | entries |(4 octets)| (6 octets)  |(1bit)|(3b) |(12bits)|
 |(2octets)|          |             |      |     |        |
 +---------+----------+-------------+------+-----+--------+
           |<-------------Repeated per entry------------->|

          Figure A.5 GroupID/MAC/VID filter format

The null VID can be used in a VDP Request sent from the hypervisor to
the external bridge. Use of the null VID indicates that the set of
VID values associated with the VSI is expected to be supplied by the
Bridge. The Bridge can obtain VID values from the VSI Type whose
identity is specified by the VSI Type information in the VDP Request.
The set of VID values is returned to the station via the VDP
Response. The returned VID value can be a locally significant value.
When a GroupID is used, it is equivalent to the VN ID in NVO3. The
GroupID will be provided by the hypervisor to the bridge.
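As a concrete illustration, a GroupID/VID Filter Info field laid out as in Figure A.4 could be packed as follows. This is a sketch only; the helper name and API are assumptions and not part of IEEE 802.1Qbg:

```python
import struct

# Illustrative sketch: packing a GroupID/VID Filter Info field as laid
# out in Figure A.4. The helper name and API are assumptions for
# illustration, not part of the standard.

def pack_groupid_vid_filter(entries):
    """entries: list of (group_id, ps, pcp, vid) tuples.

    Each entry is a 4-octet GroupID followed by 2 octets holding
    PS (1 bit), PCP (3 bits) and VID (12 bits)."""
    out = struct.pack("!H", len(entries))   # 2-octet number of entries
    for group_id, ps, pcp, vid in entries:
        ps_pcp_vid = (ps << 15) | (pcp << 12) | (vid & 0x0FFF)
        out += struct.pack("!IH", group_id, ps_pcp_vid)
    return out
```

An entry with VID 0 corresponds to the null-VID case described above, where the bridge is expected to supply the VID values.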
The bridge will map the GroupID to a locally significant VLAN ID.

The VSIID in a VDP request that identifies a VM can be in one of the
following formats: IPv4 address, IPv6 address, MAC address, UUID, or
locally defined.

Authors' Addresses

   Yizhou Li
   Huawei Technologies
   101 Software Avenue,
   Nanjing 210012
   China

   Phone: +86-25-56625409
   EMail: liyizhou@huawei.com

   Lucy Yong
   Huawei Technologies, USA

   Email: lucy.yong@huawei.com

   Lawrence Kreeger
   Cisco

   Email: kreeger@cisco.com

   Thomas Narten
   IBM

   Email: narten@us.ibm.com

   David Black
   EMC

   Email: david.black@emc.com