idnits 2.17.00 (12 Aug 2021)

/tmp/idnits3173/draft-morton-bmwg-multihome-evpn-04.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------------

     No issues found here.

  Miscellaneous warnings:
  ----------------------------------------------------------------------------

  == The copyright year in the IETF Trust and authors Copyright Line does not
     match the current year

  == The document seems to lack the recommended RFC 2119 boilerplate, even if
     it appears to use RFC 2119 keywords -- however, there's a paragraph with
     a matching beginning.  Boilerplate error?  (The document does seem to
     have the reference to RFC 2119 which the ID-Checklist requires).

  -- The document date (November 2, 2020) is 558 days in the past.  Is this
     intentional?
  Checking references for intended status: Informational
  ----------------------------------------------------------------------------

  == Missing Reference: 'RFC7432' is mentioned on line 110, but not defined

  == Unused Reference: 'RFC5180' is defined on line 467, but no explicit
     reference was found in the text

  == Unused Reference: 'RFC6201' is defined on line 472, but no explicit
     reference was found in the text

  == Unused Reference: 'RFC6985' is defined on line 488, but no explicit
     reference was found in the text

  == Unused Reference: 'OPNFV-2017' is defined on line 499, but no explicit
     reference was found in the text

  == Unused Reference: 'RFC8239' is defined on line 506, but no explicit
     reference was found in the text

  == Unused Reference: 'VSPERF-b2b' is defined on line 517, but no explicit
     reference was found in the text

  == Unused Reference: 'VSPERF-BSLV' is defined on line 523, but no explicit
     reference was found in the text

  ** Obsolete normative reference: RFC 1944 (Obsoleted by RFC 2544)

     Summary: 1 error (**), 0 flaws (~~), 10 warnings (==), 1 comment (--).

     Run idnits with the --verbose option for more detailed information about
     the items above.

--------------------------------------------------------------------------------

2   Network Working Group                                          A. Morton
3   Internet-Draft                                                 J. Uttaro
4   Updates: ???? (if approved)                                    AT&T Labs
5   Intended status: Informational                         November 2, 2020
6   Expires: May 6, 2021

8              Benchmarks and Methods for Multihomed EVPN
9                   draft-morton-bmwg-multihome-evpn-04

11  Abstract

13     Fundamental Benchmarking Methodologies for Network Interconnect
14     Devices of interest to the IETF are defined in RFC 2544.  Key
15     benchmarks applicable to restoration and multi-homed sites are in RFC
16     6894.  This memo applies these methods to Multihomed nodes
17     implemented on Ethernet Virtual Private Networks (EVPN).
19  Requirements Language

21     The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
22     "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
23     "OPTIONAL" in this document are to be interpreted as described in
24     BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
25     capitals, as shown here.

27  Status of This Memo

29     This Internet-Draft is submitted in full conformance with the
30     provisions of BCP 78 and BCP 79.

32     Internet-Drafts are working documents of the Internet Engineering
33     Task Force (IETF).  Note that other groups may also distribute
34     working documents as Internet-Drafts.  The list of current Internet-
35     Drafts is at https://datatracker.ietf.org/drafts/current/.

37     Internet-Drafts are draft documents valid for a maximum of six months
38     and may be updated, replaced, or obsoleted by other documents at any
39     time.  It is inappropriate to use Internet-Drafts as reference
40     material or to cite them other than as "work in progress."

42     This Internet-Draft will expire on May 6, 2021.

44  Copyright Notice

46     Copyright (c) 2020 IETF Trust and the persons identified as the
47     document authors.  All rights reserved.

49     This document is subject to BCP 78 and the IETF Trust's Legal
50     Provisions Relating to IETF Documents
51     (https://trustee.ietf.org/license-info) in effect on the date of
52     publication of this document.  Please review these documents
53     carefully, as they describe your rights and restrictions with respect
54     to this document.  Code Components extracted from this document must
55     include Simplified BSD License text as described in Section 4.e of
56     the Trust Legal Provisions and are provided without warranty as
57     described in the Simplified BSD License.

59  Table of Contents

61     1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   2
62     2.  Scope and Goals . . . . . . . . . . . . . . . . . . . . . . .   3
63     3.  Motivation  . . . . . . . . . . . . . . . . . . . . . . . . .   3
64     4.  Test Setups . . . . . . . . . . . . . . . . . . . . . . . . .   3
65       4.1.  Basic Configuration . . . . . . . . . . . . . . . . . . .   5
66     5.  Procedure for Full Mesh Throughput Characterization . . . . .   6
67       5.1.  Address Learning Phase  . . . . . . . . . . . . . . . . .   6
68       5.2.  Test for a Single Frame Size and Number of Unicast Flows    6
69       5.3.  Detailed Procedure  . . . . . . . . . . . . . . . . . . .   6
70       5.4.  Test Repetition . . . . . . . . . . . . . . . . . . . . .   7
71       5.5.  Benchmark Calculations  . . . . . . . . . . . . . . . . .   7
72       5.6.  Reporting . . . . . . . . . . . . . . . . . . . . . . . .   7
73     6.  Procedure for Mass Withdrawal Characterization  . . . . . . .   7
74       6.1.  Address Learning Phase  . . . . . . . . . . . . . . . . .   8
75       6.2.  Test for a Single Frame Size and Number of Flows  . . . .   8
76       6.3.  Test Repetition . . . . . . . . . . . . . . . . . . . . .   8
77       6.4.  Benchmark Calculations  . . . . . . . . . . . . . . . . .   8
78     7.  Reporting . . . . . . . . . . . . . . . . . . . . . . . . . .   9
79     8.  Security Considerations . . . . . . . . . . . . . . . . . . .   9
80     9.  IANA Considerations . . . . . . . . . . . . . . . . . . . . .  10
81     10. Acknowledgements  . . . . . . . . . . . . . . . . . . . . . .  10
82     11. References  . . . . . . . . . . . . . . . . . . . . . . . . .  10
83       11.1.  Normative References . . . . . . . . . . . . . . . . . .  10
84       11.2.  Informative References . . . . . . . . . . . . . . . . .  11
85     Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  12

87  1.  Introduction

89     The IETF's fundamental Benchmarking Methodologies are defined in
90     [RFC2544], supported by the terms and definitions in [RFC1242].
91     [RFC2544] obsoletes an earlier specification, [RFC1944].

93     This memo recognizes the importance of Ethernet Virtual Private
94     Network (EVPN) Multihoming connectivity scenarios, where a CE device
95     is connected to 2 or more PEs using an instance of an Ethernet
96     Segment.
98     In an all-active (Active-Active) scenario, CE-PE traffic is load-
99     balanced across two or more PEs.

101    Mass-withdrawal of routes may take place when an autodiscovery route
102    is used on a per Ethernet Segment basis, and there is a link failure
103    on one of the Ethernet Segment links (or when configuration changes
104    take place).

106    Although EVPN depends on address-learning in the control-plane, the
107    Ethernet Segment Instance is permitted to use "the method best suited
108    to the CE: data-plane learning, IEEE 802.1x, the Link Layer Discovery
109    Protocol (LLDP), IEEE 802.1aq, Address Resolution Protocol (ARP),
110    management plane, or other protocols" [RFC7432].

112    This memo seeks to benchmark these important cases (and others).

114 2.  Scope and Goals

116    The scope of this memo is to define methods to unambiguously perform
117    tests, measure the benchmark(s), and report the results for the
118    Capacity of EVPN Multihoming connectivity scenarios, and for other
119    key restoration activities (such as address withdrawal) covering link
120    failure in the Active-Active scenario.

122    The goal is to provide more efficient test procedures where possible,
123    and to expand reporting with additional interpretation of the
124    results.  The tests described in this memo address some key
125    multihoming scenarios implemented on a Device Under Test (DUT) or
126    System Under Test (SUT).

128 3.  Motivation

130    The Multihoming scenarios described in this memo emphasize features
131    with practical value to the industry that have seen deployment.
132    Therefore, these scenarios deserve the further attention that follows
133    from benchmarking activities and further study.

135 4.  Test Setups

137    For simple Capacity/Throughput Benchmarks, the Test Setup MUST be
138    consistent with Figure 1 of [RFC2544], or Figure 2 when the tester's
139    sender and receiver are different devices.

141       +--------+                  ,-----.          +--------+
142       |        |                 /       \         |        |
143       |        |                /(  PE    ....     |        |
144       |        |               /  \  1   /         |        |
145       |  Test  |    ,-----.   /    `-----'         |  Test  |
146       |        |   /       \ /                     |        |
147       | Device |...(  CE    X                      | Device |
148       |        |   \   1   / \                     |        |
149       |        |    `-----'   \    ,-----.         |        |
150       |        |               \  /       \        |        |
151       |        |                \(  PE    ....     |        |
152       +--------+                  \   2   /        +--------+
153                                    `-----'

155       Figure 1  SUT for Throughput and other Ethernet Segment Tests

157    In Figure 1, the System Under Test (SUT) comprises a single CE
158    device and two or more PE devices.

160    The tester SHALL be connected to the CE and to every PE, and be
161    capable of simultaneously sending and receiving frames on all ports
162    with connectivity.  The tester SHALL be capable of generating
163    multiple flows (according to a 5-tuple definition, or any sub-set of
164    the 5-tuple).  The tester SHALL be able to control the IP capacity of
165    sets of individual flows, and the presence of sets of flows on
166    specific interface ports.

168    The tester SHALL be capable of generating and receiving a full mesh
169    of Unicast flows, as described in section 3.0 of [RFC2889]:

171       "In fully meshed traffic, each interface of a DUT/SUT is set up to
172       both receive and transmit frames to all the other interfaces under
173       test."

175    Other mandatory testing aspects described in [RFC2544] and [RFC2889]
176    MUST be included, unless explicitly modified in the next section.

178    The ingress and egress link speeds and link layer protocols MUST be
179    specified and used to compute the maximum theoretical frame rate
180    while respecting the minimum inter-frame gap.

182    A second test case is one where a BGP backbone implements MPLS-LDP to
183    provide connectivity between multiple PE - ESI - CE locations.

185     Test                                                       Test
186     Device                                                     Device
187                          EVI-1
188     +---+           ,-----.                                   +---+
189     |   |          /       \                                  |   |
190     |   |         /(  PE    .....  ESI                        |   |
191     |   |        /  \  1   /     \  EVI-1 0                   |   |
192     |   | MAC ,-----.     /       `-----' \  ,-----.   +--+   |   |
193     |   | A  /       \   /                 \/       \  |  |   |   |
194     |   |...(   CE    X    ESI 1           X...(  PE  ...|CE|.|   |
195     |   |    \   1   / \                   /\   3   /  | 2|   |   |
196     |   |     `-----'   \   ,-----.       /  `-----'   +--+   |   |
197     |   |                \ /       \     /                    |   |
198     |   |                 \(  PE    ..../                     |   |
199     +---+                  \   2   /                          +---+
200                             `-----'
201                              EVI-1

203     Figure 2  SUT with BGP & MPLS interconnecting multiple PE-ESI-CE
204               locations

206    PE1 learns MAC A via data-plane learning.  PE1 and PE2 share ESI 1
207    (Ethernet Segment Identifier) and advertise an Ethernet A-D route
208    with ESI 1 to PE3; PE1 also advertises MAC A to PE3.  PE3
209    instantiates either Active/Backup or Active/Active forwarding toward
210    PE1 and PE2 for MAC A (assume PE1 is Active in the Active/Backup
       scenario).

212    All link speeds MUST be reported, along with the complete device
213    configurations of the SUT and Test Device(s).

215    Additional Test Setups and configurations will be provided in this
216    section, after review.

218    One capacity benchmark pertains to the number of ESIs that a network
219    with multiple PE - ESI - CE locations can support.

221 4.1.  Basic Configuration

223    This configuration serves as the base configuration for all test
224    cases.

226    All routers except the CEs are configured with OSPF/IS-IS, LDP,
227    MPLS, and BGP with the EVPN address family.

229    All routers except the CEs must have IBGP configured.

231    PE1, PE2, and PE3 must be configured with an EVI context (EVI 1).

233    PE1 and PE2 must be configured with a non-zero ESI, indicating that
234    the two VLANs coming from CE1 belong to the same Ethernet Segment
235    (ESI 1).

237    PE1 and PE2 are running the Single-Active mode of EVPN.

239    CE1 and CE2 are acting as bridges, configured with the VLANs that
240    are configured on PE1, PE2, and PE3.

242    In the [RFC2889] procedures that follow, the test traffic will be
243    bidirectional.

245 5.  Procedure for Full Mesh Throughput Characterization

247    Objective: To characterize the ability of a DUT/SUT to process frames
248    between the CE and one or more PEs in a multihomed connectivity
249    scenario.  Figure 1 gives the least-complex test setup.  Figure 2
250    gives a possible alternative with full BGP and MPLS interconnection.

252    The Procedure follows.

254 5.1.  Address Learning Phase

256    "For every address, learning frames MUST be sent to the DUT/SUT to
257    allow the DUT/SUT to update its address tables properly."  [RFC2889]

259 5.2.  Test for a Single Frame Size and Number of Unicast Flows

261    Each trial in the test requires configuring a number of flows (from
262    100 to 100k) and a fixed frame size (64, 128, 256, 512, 1024, 1280,
263    or 1518 octets, as per [RFC2544]).  Frame formats MUST be specified;
264    they are as described in section 4 of [RFC2889].

266    Only one of frame size and number of flows SHALL change for each
267    test.

269 5.3.  Detailed Procedure

271    The Procedure SHALL follow section 5.1 of [RFC2889].

273    Specifically, the Throughput measurement parameters found in section
274    5.1.2 of [RFC2889] SHALL be configured and reported with the results.

276    The procedure for transmitting Frames on each port is described in
277    section 5.1.3 of [RFC2889] and SHALL be followed (adapting to the
278    number of ports in the test setup).

280    Once the traffic is started, the procedure for Measurements described
281    in section 5.1.4 of [RFC2889] SHALL be followed (adapting to the
282    number of ports in the test setup).  The section on Throughput
283    measurement (5.1.4 of [RFC2889]) SHALL be followed.

285    In the case that one or more of the CE and PEs are virtual
286    implementations, the search algorithm of [TST009] that provides
287    consistent results when faced with host transient activity SHOULD be
288    used (Binary Search with Loss Verification).

290 5.4.  Test Repetition

292    The test MUST be repeated N times for each frame size in the subset
293    list, and each Throughput value made available for further processing
294    (below).

296 5.5.  Benchmark Calculations

298    For each Frame size and number of flows, calculate the following
299    summary statistics for the Throughput values over the N tests:

301    o  Average (Benchmark)

303    o  Minimum

305    o  Maximum

307    o  Standard Deviation

309    Comparison of the per-PE results will determine how the load was
       balanced among the PEs.

311 5.6.  Reporting

313    The recommendation for graphical reporting provided in Section 5.1.4
314    of [RFC2889] SHOULD be followed, along with the specifications in
315    Section 7 below.

317 6.  Procedure for Mass Withdrawal Characterization

319    Objective: To characterize the ability of a DUT/SUT to process frames
320    between the CE and one or more PEs in a multihomed connectivity
321    scenario when a mass withdrawal takes place.  Figure 2 gives the
       test setup.

323    The Procedure follows.

325 6.1.  Address Learning Phase

327    "For every address, learning frames MUST be sent to the DUT/SUT to
328    allow the DUT/SUT to update its address tables properly."  [RFC2889]

330 6.2.  Test for a Single Frame Size and Number of Flows

332    Each trial in the test requires configuring a number of flows (from
333    100 to 100k) and a fixed frame size (64, 128, 256, 512, 1024, 1280,
334    or 1518 octets, as per [RFC2544]).

336    Only one of frame size and number of flows SHALL change for each
337    test.

339    The Offered Load SHALL be transmitted at the Throughput level
340    previously determined for the selected Frame size and number of Flows
341    in use (see Section 5).

343    The Procedure SHALL follow section 5.1 of [RFC2889] (except that
344    there is no need to search for the Throughput level).  See Section 5
345    above for additional requirements, especially Section 5.3.

347    When traffic has been sent for 5 seconds, one of the CE-PE links on
348    the ESI SHALL be disabled, and the time of this action SHALL be
349    recorded for further calculations.
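
   The summary statistics called for in the Benchmark Calculations
   sections, and a loss-derived restoration-time estimate, can be
   sketched as follows.  This is an illustrative sketch only (not part
   of the method): the trial values, the function names, and the
   assumption of a constant Offered Load during the withdrawal are the
   editor's own.

```python
import statistics

def summarize(values):
    """Summary statistics over N repeated trials (Average, Min, Max, StdDev)."""
    return {
        "average": statistics.mean(values),  # reported as the Benchmark
        "minimum": min(values),
        "maximum": max(values),
        "stdev": statistics.stdev(values),   # sample standard deviation
    }

def loss_derived_restoration_time(frames_lost, offered_load_fps):
    """Estimate restoration time as frames lost divided by the (assumed
    constant) Offered Load in frames per second."""
    return frames_lost / offered_load_fps

# Hypothetical example: five repeated Throughput trials, in frames per second
trials = [26000, 25500, 27000, 26200, 25800]
stats = summarize(trials)

# Hypothetical example: one frame lost at a 26000 fps Offered Load
t_rest = loss_derived_restoration_time(1, 26000)
```

   At a 26000 fps Offered Load, a single lost frame corresponds to a
   restoration-time estimate of roughly 0.00004 seconds.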
       For example, if the CE1 link to
350    PE1 is disabled, this should trigger a Mass Withdrawal of EVI-1
351    addresses, and the subsequent re-routing of traffic to PE2.

353    Frame losses are expected to be recorded during the restoration time.
354    The time for restoration may be estimated as described in section 3.5
355    of [RFC6412].

357 6.3.  Test Repetition

359    The test MUST be repeated N times for each frame size in the subset
360    list, and each restoration time value made available for further
361    processing (below).

363 6.4.  Benchmark Calculations

365    For each Frame size and number of flows, calculate the following
366    summary statistics for the Loss (or the Time to return to the
367    Throughput level after restoration) values over the N tests:

369    o  Average (Benchmark)

371    o  Minimum

372    o  Maximum

374    o  Standard Deviation

376 7.  Reporting

378    The results SHOULD be reported in the format of a table with a row
379    for each of the tested frame sizes and Numbers of Flows.  There
380    SHOULD be columns for the frame size with the number of flows, and
381    for the resultant average frame count (or time) for each type of
382    data stream tested.

384    The number of tests Averaged for the Benchmark, N, MUST be reported.

386    The Minimum, Maximum, and Standard Deviation across all complete
387    tests SHOULD also be reported.

389    The Corrected DUT Restoration Time SHOULD also be reported, as
390    applicable.

392   +----------------+-------------------+----------------+-------------+
393   | Frame Size,    | Ave Benchmark,    | Min,Max,StdDev | Calculated  |
394   | octets + #     | fps, frames or    |                | Time, Sec   |
395   | Flows          | time              |                |             |
396   +----------------+-------------------+----------------+-------------+
397   | 64,100         | 26000             | 25500,27000,20 | 0.00004     |
398   +----------------+-------------------+----------------+-------------+

400               Throughput or Loss/Restoration Time Results

402    Static and configuration parameters:

404    Number of test repetitions, N.

406    Minimum Step Size (during searches), in frames.

408 8.  Security Considerations

410    Benchmarking activities as described in this memo are limited to
411    technology characterization using controlled stimuli in a laboratory
412    environment, with dedicated address space and the other constraints
413    of [RFC2544].

415    The benchmarking network topology will be an independent test setup
416    and MUST NOT be connected to devices that may forward the test
417    traffic into a production network, or misroute traffic to the test
418    management network.  See [RFC6815].

420    Further, benchmarking is performed on a "black-box" basis, relying
421    solely on measurements observable external to the DUT/SUT.

423    Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
424    benchmarking purposes.  Any implications for network security arising
425    from the DUT/SUT SHOULD be identical in the lab and in production
426    networks.

428 9.  IANA Considerations

430    This memo makes no requests of IANA.

432 10.  Acknowledgements

434    Thanks to Sudhin Jacob for his review and comments on the bmwg list.

436    Thanks to Aman Shaikh for sharing his comments on the draft directly
437    with the authors.

439 11.  References

441 11.1.  Normative References

443    [RFC1242]  Bradner, S., "Benchmarking Terminology for Network
444               Interconnection Devices", RFC 1242, DOI 10.17487/RFC1242,
445               July 1991, <https://www.rfc-editor.org/info/rfc1242>.

447    [RFC1944]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
448               Network Interconnect Devices", RFC 1944,
449               DOI 10.17487/RFC1944, May 1996,
450               <https://www.rfc-editor.org/info/rfc1944>.

452    [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
453               Requirement Levels", BCP 14, RFC 2119,
454               DOI 10.17487/RFC2119, March 1997,
455               <https://www.rfc-editor.org/info/rfc2119>.

457    [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
458               Network Interconnect Devices", RFC 2544,
459               DOI 10.17487/RFC2544, March 1999,
460               <https://www.rfc-editor.org/info/rfc2544>.

462    [RFC2889]  Mandeville, R. and J. Perser, "Benchmarking Methodology
463               for LAN Switching Devices", RFC 2889,
464               DOI 10.17487/RFC2889, August 2000,
465               <https://www.rfc-editor.org/info/rfc2889>.
467    [RFC5180]  Popoviciu, C., Hamza, A., Van de Velde, G., and D.
468               Dugatkin, "IPv6 Benchmarking Methodology for Network
469               Interconnect Devices", RFC 5180, DOI 10.17487/RFC5180,
470               May 2008, <https://www.rfc-editor.org/info/rfc5180>.

472    [RFC6201]  Asati, R., Pignataro, C., Calabria, F., and C. Olvera,
473               "Device Reset Characterization", RFC 6201,
474               DOI 10.17487/RFC6201, March 2011,
475               <https://www.rfc-editor.org/info/rfc6201>.

477    [RFC6412]  Poretsky, S., Imhoff, B., and K. Michielsen, "Terminology
478               for Benchmarking Link-State IGP Data-Plane Route
479               Convergence", RFC 6412, DOI 10.17487/RFC6412, November
480               2011, <https://www.rfc-editor.org/info/rfc6412>.

482    [RFC6815]  Bradner, S., Dubray, K., McQuaid, J., and A. Morton,
483               "Applicability Statement for RFC 2544: Use on Production
484               Networks Considered Harmful", RFC 6815,
485               DOI 10.17487/RFC6815, November 2012,
486               <https://www.rfc-editor.org/info/rfc6815>.

488    [RFC6985]  Morton, A., "IMIX Genome: Specification of Variable
489               Packet Sizes for Additional Testing", RFC 6985,
490               DOI 10.17487/RFC6985, July 2013,
491               <https://www.rfc-editor.org/info/rfc6985>.

493    [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
494               2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
495               May 2017, <https://www.rfc-editor.org/info/rfc8174>.

497 11.2.  Informative References

499    [OPNFV-2017]
500               Cooper, T., Morton, A., and S. Rao, "Dataplane
501               Performance, Capacity, and Benchmarking in OPNFV", June
502               2017.

506    [RFC8239]  Avramov, L. and J. Rapp, "Data Center Benchmarking
507               Methodology", RFC 8239, DOI 10.17487/RFC8239, August
508               2017, <https://www.rfc-editor.org/info/rfc8239>.

510    [TST009]   Morton, R. A., "ETSI GS NFV-TST 009 V3.2.1 (2019-06),
511               "Network Functions Virtualisation (NFV) Release 3;
512               Testing; Specification of Networking Benchmarks and
513               Measurement Methods for NFVI"", June 2019.

517    [VSPERF-b2b]
518               Morton, A., "Back2Back Testing Time Series (from CI)",
519               June 2017.

523    [VSPERF-BSLV]
524               Morton, A. and S. Rao, "Evolution of Repeatability in
525               Benchmarking: Fraser Plugfest (Summary for IETF BMWG)",
526               July 2018.
531 Authors' Addresses

533    Al Morton
534    AT&T Labs
535    200 Laurel Avenue South
536    Middletown, NJ  07748
537    USA

539    Phone: +1 732 420 1571
540    Fax:   +1 732 368 1192
541    Email: acm@research.att.com

543    Jim Uttaro
544    AT&T Labs
545    200 Laurel Avenue South
546    Middletown, NJ  07748
547    USA

549    Email: uttaro@att.com