v6ops                                                                 G.
Lencse
Internet-Draft                                Szechenyi Istvan University
Intended status: Informational                               7 March 2022
Expires: 8 September 2022


          Scalability of IPv6 Transition Technologies for IPv4aaS
               draft-lencse-v6ops-transition-scalability-02

Abstract

   Several IPv6 transition technologies have been developed to provide
   customers with IPv4-as-a-Service (IPv4aaS) for ISPs with an IPv6-only
   access and/or core network.  All these technologies have their
   advantages and disadvantages, and depending on the existing topology,
   skills, strategy and other preferences, one of these technologies may
   be the most appropriate solution for a network operator.

   This document examines the scalability of the five most prominent
   IPv4aaS technologies (464XLAT, Dual-Stack Lite, Lightweight 4over6,
   MAP-E, MAP-T) considering two aspects: (1) how their performance
   scales up with the number of CPU cores, (2) how their performance
   degrades when the number of concurrent sessions is increased until
   the hardware limit is reached.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on 8 September 2022.

Copyright Notice

   Copyright (c) 2022 IETF Trust and the persons identified as the
   document authors.  All rights reserved.
   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents (https://trustee.ietf.org/
   license-info) in effect on the date of publication of this document.
   Please review these documents carefully, as they describe your rights
   and restrictions with respect to this document.  Code Components
   extracted from this document must include Revised BSD License text as
   described in Section 4.e of the Trust Legal Provisions and are
   provided without warranty as described in the Revised BSD License.

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   2
     1.1.  Requirements Language . . . . . . . . . . . . . . . . . .   3
   2.  Scalability of iptables . . . . . . . . . . . . . . . . . . .   3
     2.1.  Measurement Method  . . . . . . . . . . . . . . . . . . .   3
     2.2.  Performance scale up against the number of CPU cores  . .   4
     2.3.  Performance degradation caused by the number of
           sessions  . . . . . . . . . . . . . . . . . . . . . . . .   6
     2.4.  Connection tear down rate . . . . . . . . . . . . . . . .   8
   3.  Scalability of Jool . . . . . . . . . . . . . . . . . . . . .   9
     3.1.  Measurement Method  . . . . . . . . . . . . . . . . . . .  10
     3.2.  Performance scale up against the number of CPU cores  . .  10
     3.3.  Performance degradation caused by the number of
           sessions  . . . . . . . . . . . . . . . . . . . . . . . .  11
     3.4.  Connection tear down rate . . . . . . . . . . . . . . . .  12
   4.  Acknowledgements  . . . . . . . . . . . . . . . . . . . . . .  13
   5.  IANA Considerations . . . . . . . . . . . . . . . . . . . . .  13
   6.  Security Considerations . . . . . . . . . . . . . . . . . . .  13
   7.  References  . . . . . . . . . . . . . . . . . . . . . . . . .  13
     7.1.  Normative References  . . . . . . . . . . . . . . . . . .  13
     7.2.  Informative References  . . . . . . . . . . . . . . . . .  14
   Appendix A.  Change Log . . . . . . . . . . . . . . . . . . . . .  15
     A.1.  00  . .
 . . . . . . . . . . . . . . . . . . . . . . .  15
     A.2.  01  . . . . . . . . . . . . . . . . . . . . . . . . . .  15
   Author's Address  . . . . . . . . . . . . . . . . . . . . . . . .  15

1.  Introduction

   The IETF has standardized several IPv6 transition technologies
   [LEN2019] and has taken a neutral position, leaving the selection of
   the most appropriate ones to the market.
   [I-D.ietf-v6ops-transition-comparison] provides a comprehensive
   comparative analysis of the five most prominent IPv4aaS technologies
   to assist operators with this problem.  This document adds one more
   detail: measurement data regarding the scalability of the examined
   IPv4aaS technologies.

   Currently, this document contains the scalability measurements of
   the iptables stateful NAT44 and the Jool stateful NAT64
   implementations.  They serve as samples to test whether the
   disclosed results are (1) useful and (2) sufficient for network
   operators.

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

2.  Scalability of iptables

2.1.  Measurement Method

   [RFC8219] has defined a benchmarking methodology for IPv6 transition
   technologies.  [I-D.lencse-bmwg-benchmarking-stateful] has amended it
   by addressing how to benchmark stateful NATxy gateways using the
   pseudorandom port numbers recommended by [RFC4814].  It has defined a
   measurement procedure for the maximum connection establishment rate
   and reused the classic measurement procedures, such as throughput,
   latency, and frame loss rate, from [RFC8219].  We used two of them,
   maximum connection establishment rate and throughput, to
   characterize the performance of the examined system.
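The rate-searching procedure common to these measurements can be illustrated with a minimal sketch.  This is our own illustration, not code from the cited methodology documents; `trial_passes` is a hypothetical stand-in for one Tester trial executed at a given rate.

```python
def max_rate_binary_search(trial_passes, lower, upper, error):
    """Find the highest rate (connections or frames per second) at
    which a trial still passes.  The search stops when the distance
    between its bounds drops below 'error' (chosen as 0.1% of the
    expected result in our measurements)."""
    while upper - lower > error:
        mid = (lower + upper) / 2
        if trial_passes(mid):
            lower = mid   # the DUT kept up: try a higher rate
        else:
            upper = mid   # the DUT failed: back off
    return lower          # the highest rate proven to pass

# Toy DUT model that can sustain at most 223,500 connections per second:
rate = max_rate_binary_search(lambda r: r <= 223_500, 0, 4_000_000, 100)
```

The returned value is always a rate that the device under test actually sustained, at most `error` below the true maximum.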
   The scalability of iptables is examined in two aspects:

   *  How does its performance scale up with the number of CPU cores?

   *  How does its performance degrade when the number of concurrent
      sessions is increased?

                 +--------------------------------------+
        10.0.0.2 |Initiator                    Responder| 198.19.0.2
   +-------------|                Tester                |<------------+
   | private IPv4|            [state table]             | public IPv4 |
   |             +--------------------------------------+             |
   |                                                                  |
   |             +--------------------------------------+             |
   |    10.0.0.1 |                 DUT:                 | 198.19.0.1  |
   +------------>|        Stateful NAT44 gateway        |-------------+
    private IPv4 |     [connection tracking table]      | public IPv4
                 +--------------------------------------+

       Figure 1: Test setup for benchmarking stateful NAT44 gateways

   The test setup in Figure 1 was followed.  The two devices, the
   Tester and the DUT (Device Under Test), were both Dell PowerEdge
   R430 servers having two 2.1GHz Intel Xeon E5-2683 v4 CPUs, 384GB
   2400MHz DDR4 RAM and Intel 10G dual port X540 network adapters.  The
   NICs of the servers were interconnected by direct cables, and the
   CPU clock frequency was set to a fixed 2.1 GHz on both servers.
   They ran the Debian 9.13 Linux operating system with the
   4.9.0-16-amd64 kernel.  The measurements were performed by siitperf
   [LEN2021] using the "stateful" branch (latest commit Aug. 16, 2021).
   The DPDK version was 16.11.11-1+deb9u2.  The version of iptables was
   1.6.0.

   The ratio of the number of connections in the connection tracking
   table and the value of the hashsize parameter of iptables
   significantly influences its performance.
   Although the default setting is hashsize=nf_conntrack_max/8, we
   usually set hashsize=nf_conntrack_max to increase the performance of
   iptables.  This was crucial when a high number of connections was
   used, because then the execution time of the tests was dominated by
   the preliminary phase, in which several hundreds of millions of
   connections had to be established.  (In some cases, we had to use
   different settings due to memory limitations.  The tables presenting
   the results always contain these parameters.)

   The size of the port number pool is an important parameter of the
   benchmarking method for stateful NATxy gateways, thus it is also
   given for all tests.

2.2.  Performance scale up against the number of CPU cores

   To examine how the performance of iptables scales up with the number
   of CPU cores, the number of active CPU cores was set to 1, 2, 4, 8,
   and 16 using the "maxcpus=" kernel parameter.

   The number of connections was always 4,000,000, using 4,000
   different source port numbers and 1,000 different destination port
   numbers.  Both the connection tracking table size and the hash table
   size were set to 2^23.

   The error of the binary search was chosen to be lower than 0.1% of
   the expected results.  The experiments were executed 10 times.

   Besides the connection establishment rate and the throughput of
   iptables, the throughput of the IPv4 packet forwarding of the Linux
   kernel was also measured to provide a basis for comparison.

   The results are presented in Figure 2.  The unit for the maximum
   connection establishment rate is 1,000 connections per second.  The
   unit for throughput is 1,000 packets per second (measured with
   bidirectional traffic; the number of all packets per second is
   displayed).

   num. CPU cores             1         2         4         8        16
   src ports              4,000     4,000     4,000     4,000     4,000
   dst ports              1,000     1,000     1,000     1,000     1,000
   num. conn.         4,000,000 4,000,000 4,000,000 4,000,000 4,000,000
   conntrack t. s.         2^23      2^23      2^23      2^23      2^23
   hash table size         2^23      2^23      2^23      2^23      2^23
   c.t.s/num.conn.        2.097     2.097     2.097     2.097     2.097
   num. experiments          10        10        10        10        10
   error                    100       100       100     1,000     1,000
   cps median             223.5     371.1     708.7     1,341     2,383
   cps min                221.6     367.7     701.7     1,325     2,304
   cps max                226.7     375.9     723.6     1,376     2,417
   cps rel. scale up          1     0.830     0.793     0.750     0.666
   throughput median      414.9     742.3     1,379     2,336     4,557
   throughput min         413.9     740.6     1,373     2,311     4,436
   throughput max         416.1     746.9     1,395     2,361     4,627
   tp. rel. scale up          1     0.895     0.831     0.704     0.686
   IPv4 packet forwarding (using the same port number ranges)
   error                    200       500     1,000     1,000     1,000
   throughput median      910.9     1,523     3,016     5,920    11,561
   throughput min         874.8     1,485     2,951     5,811    10,998
   throughput max         914.3     1,534     3,037     5,940    11,627
   tp. rel. scale up          1     0.836     0.828     0.812     0.793
   throughput ratio (%)    45.5      48.8      45.7      39.5      39.4

     Figure 2: Scale up of iptables against the number of CPU cores
     (Please refer to the next figure for the explanation of the
     abbreviations.)

   abbreviation          explanation
   ------------          -----------
   num. CPU cores        number of CPU cores
   src ports             size of the source port number range
   dst ports             size of the destination port number range
   num. conn.            number of connections = src ports * dst ports
   conntrack t. s.       size of the connection tracking table of the
                         DUT
   hash table size       size of the hash table of the DUT
   c.t.s/num.conn.       conntrack table size / number of connections
   num. experiments      number of experiments
   error                 the difference between the upper and the lower
                         bound of the binary search when it stops
   cps (median/min/max)  maximum connection establishment rate
                         (median, minimum, maximum)
   cps rel. scale up     the relative scale up of the maximum
                         connection establishment rate against the
                         number of CPU cores
   tp. rel. scale up     the relative scale up of the throughput
   throughput ratio (%)  the ratio of the throughput of iptables and
                         the throughput of IPv4 packet forwarding

     Figure 3: Explanation of the abbreviations for the scale up of
     iptables against the number of CPU cores

   Whereas the throughput of IPv4 packet forwarding scaled up from
   0.91Mpps to 11.56Mpps, showing a relative scale up of 0.793, the
   throughput of iptables scaled up from 414.9kpps to 4,557kpps,
   showing a relative scale up of 0.686 (and the relative scale up of
   the maximum connection establishment rate is only 0.666).  On the
   one hand, this is the price of the stateful operation.  On the
   other hand, this result is quite good compared to the scale-up
   result of NSD (a high-performance authoritative DNS server)
   presented in Table 9 of [LEN2020], which is only 0.52
   (1,454,661/177,432 = 8.2-fold performance using 16 cores), even
   though DNS is not a stateful technology.

2.3.  Performance degradation caused by the number of sessions

   To examine how the performance of iptables degrades with the number
   of connections in the connection tracking table, the number of
   connections was increased fourfold by doubling the size of both the
   source port number range and the destination port number range.
   Both the connection tracking table size and the hash table size were
   also increased fourfold.  However, we reached the limits of the
   hardware at 400,000,000 connections: we could not set the size of
   the hash table to 2^29, but only to 2^28.  The same value was used
   at 800,000,000 connections too, when the number of connections was
   only doubled, because 1.6 billion connections would not fit into the
   memory.

   The error of the binary search was chosen to be lower than 0.1% of
   the expected results.  The experiments were executed 10 times
   (except for the very long lasting measurements with 800,000,000
   connections).
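The parameter progression of this measurement series can be reproduced with a short sketch.  The next-power-of-two sizing rule and the 2^28 cap are our reading of the parameter rows of the figures below; the helper name is ours.

```python
HASH_CAP = 2**28   # largest hash table that fit into the DUT's memory

def table_params(src_ports, dst_ports):
    """Sizes used for one measurement step: doubling both port ranges
    quadruples the connection count (src_ports * dst_ports); the
    conntrack table is sized to the next power of two, and the hash
    table is additionally capped by the available memory."""
    conns = src_ports * dst_ports
    conntrack = 1 << (conns - 1).bit_length()  # next power of two >= conns
    return conns, conntrack, min(conntrack, HASH_CAP)

steps = [(2_500, 625), (5_000, 1_250), (10_000, 2_500),
         (20_000, 5_000), (40_000, 10_000), (40_000, 20_000)]
rows = [table_params(s, d) for s, d in steps]
# The last step doubles only the destination port range, because
# 1.6 billion connections would not have fit into the memory.
```

The final two rows show where the series hits the hardware limit: the conntrack table keeps growing (2^29, 2^30), but the hash table stays at 2^28.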
   The results are presented in Figure 4.  The unit for the maximum
   connection establishment rate is 1,000,000 connections per second.
   The unit for throughput is 1,000,000 packets per second (measured
   with bidirectional traffic; the number of all packets per second is
   displayed).

   num. conn.        1.56M   6.25M     25M    100M    400M    800M
   src ports         2,500   5,000  10,000  20,000  40,000  40,000
   dst ports           625   1,250   2,500   5,000  10,000  20,000
   conntrack t. s.    2^21    2^23    2^25    2^27    2^29    2^30
   hash table size    2^21    2^23    2^25    2^27    2^28    2^28
   num. exp.            10      10      10      10      10       5
   error             1,000   1,000   1,000   1,000   1,000   1,000
   n.c./h.t.s.       0.745   0.745   0.745   0.745   1.490   2.980
   cps median        2.406   2.279   2.278   2.237   2.013   1.405
   cps min           2.358   2.226   2.226   2.124   1.983   1.390
   cps max           2.505   2.315   2.317   2.290   2.050   1.440
   throughput med.   5.326   4.369   4.510   4.516   4.244   3.689
   throughput min    5.217   4.240   3.994   4.373   4.217   3.670
   throughput max    5.533   4.408   4.572   4.537   4.342   3.709

     Figure 4: Performance of iptables against the number of sessions

   The performance of iptables shows degradation at 6.25M connections
   compared to 1.56M connections, very likely due to the exhaustion of
   the L3 cache of the CPU of the DUT.  Then the performance of
   iptables is fairly constant up to 100M connections.  A small
   performance decrease can be observed at 400M connections due to the
   lower hash table size.  A more significant performance decrease can
   be observed at 800M connections.  It is caused by two factors:

   *  on average, about 3 connections were hashed to the same place

   *  non-NUMA-local memory was also used.

   We note that the CPU has 2 NUMA nodes; cores 0, 2, ... 14 belong to
   NUMA node 0, and cores 1, 3, ... 15 belong to NUMA node 1.  The
   maximum memory consumption with 400,000,000 connections was below
   150GB, thus it could be stored in NUMA-local memory.
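The effect of the capped hash table can be quantified as the average hash-chain length, i.e. the n.c./h.t.s. row of Figure 4.  This is a back-of-the-envelope sketch of our own:

```python
def avg_chain_length(connections, hash_table_size):
    """Average number of connections hashed to one bucket.  Once the
    hash table is capped at 2**28 entries, each doubling of the
    connection count doubles the chain length that every lookup has
    to traverse."""
    return connections / hash_table_size

# The two capped steps of Figure 4, both with a 2**28-entry hash table:
r400 = avg_chain_length(400_000_000, 2**28)   # ~1.49
r800 = avg_chain_length(800_000_000, 2**28)   # ~2.98, i.e. about 3
```

This is the first of the two factors cited above for the 800M drop: roughly three connections share every bucket, versus fewer than one at the smaller step sizes.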
   Therefore, we have pointed out important limitations of the stateful
   NAT44 technology:

   *  there is a performance decrease when approaching the hardware
      limits

   *  there is a hardware limit beyond which the system cannot handle
      the connections at all (e.g. 1,600M connections would not fit
      into the memory).

   Therefore, we can conclude that, on the one hand, well-tailored
   hashing may guarantee an excellent scale-up of stateful NAT44
   regarding the number of connections in a wide range; on the other
   hand, stateful operation has its limits, resulting both in a
   performance decrease when approaching the hardware limits and in an
   inability to handle more sessions when reaching the memory limits.

2.4.  Connection tear down rate

   [I-D.lencse-bmwg-benchmarking-stateful] has defined the connection
   tear down rate measurement as an aggregate measurement: N
   connections are loaded into the connection tracking table of the
   DUT, then the entire content of the connection tracking table is
   deleted, and its deletion time (T) is measured.  Finally, the
   connection tear down rate is computed as N/T.

   We have observed that the deletion of an empty connection tracking
   table of iptables may take a significant amount of time, depending
   on its size.  Therefore, we made our measurements more accurate by
   subtracting the deletion time of the empty connection tracking table
   from that of the filled one, thus obtaining the time spent deleting
   the connections.

   The same setup and parameters were used as in Section 2.3, and the
   experiments were executed 10 times (except for the long lasting
   measurements with 800,000,000 connections).

   The results are presented in Figure 5.

   num. conn.             1.56M    6.25M     25M    100M    400M    800M
   src ports              2,500    5,000  10,000  20,000  40,000  40,000
   dst ports                625    1,250   2,500   5,000  10,000  20,000
   conntrack t. s.         2^21     2^23    2^25    2^27    2^29    2^30
   hash table size         2^21     2^23    2^25    2^27    2^28    2^28
   num. exp.                 10       10      10      10      10       5
   n.c./h.t.s.            0.745    0.745   0.745   0.745   1.490   2.980
   full contr. del med     4.33    18.05   74.47  305.33 1,178.3 2,263.1
   full contr. del min     4.25    17.93   72.04  299.06 1,164.0 2,259.6
   full contr. del max     4.38    18.20   75.13  310.05 1,188.3 2,275.2
   empty contr. del med    0.55     1.28    4.17   15.74    31.2    31.2
   empty contr. del min    0.55     1.26    4.16   15.73    31.1    31.1
   empty contr. del max    0.57     1.29    4.22   15.79    31.2    31.2
   conn. deletion time     3.78    16.77   70.30  289.59 1,147.2 2,232.0
   conn. tear d. rate   413,360  372,689 355,619 345,316 348,690 358,429

      Figure 5: Connection tear down rate of iptables against the
      number of connections

   The connection tear down performance of iptables shows significant
   degradation at 6.25M connections compared to 1.56M connections, very
   likely due to the exhaustion of the L3 cache of the CPU of the DUT.
   Then it shows only a minor degradation up to 100M connections.  A
   small performance increase can be observed at 400M connections due
   to the relatively lower hash table size.  A more visible performance
   increase can be observed at 800M connections.  It is likely caused
   by keeping the hash table size constant while doubling the number of
   connections: the same thing that caused the performance degradation
   of the maximum connection establishment rate and throughput now made
   the deletion of the connections faster, and thus increased the
   connection tear down rate.

   We note that according to the recommended settings of iptables, 8
   connections are hashed to each place of the hash table on average,
   but we deliberately used a much smaller number (0.745 whenever it
   was possible) to increase the maximum connection establishment rate
   and thus to speed up experimenting.
   However, this choice finally slowed down our experiments
   significantly due to the very low connection tear down rate.

3.  Scalability of Jool

3.1.  Measurement Method

   The same methodology was used as in Section 2, but now the test
   setup in Figure 6 was followed.  The same Tester and DUT devices
   were used as before, but the operating system of the DUT was updated
   to Debian 10.11 with the 4.19.0-18-amd64 kernel to meet the
   requirement of the jool-tools package.  The version of Jool was
   4.1.6.  (This was the most mature version of Jool at the date of
   starting the measurements; release date: 2021-12-10.)

                 +--------------------------------------+
       2001:2::2 |Initiator                    Responder| 198.19.0.2
   +-------------|                Tester                |<------------+
   | IPv6 address|            [state table]             | IPv4 address|
   |             +--------------------------------------+             |
   |                                                                  |
   |             +--------------------------------------+             |
   |   2001:2::1 |                 DUT:                 | 198.19.0.1  |
   +------------>|        Stateful NAT64 gateway        |-------------+
    IPv6 address |     [connection tracking table]      | IPv4 address
                 +--------------------------------------+

       Figure 6: Test setup for benchmarking stateful NAT64 gateways

   Unlike with iptables, we did not find any way to tune the hashsize
   or any other parameters of Jool.

3.2.  Performance scale up against the number of CPU cores

   The number of connections was always 1,000,000, using 2,000
   different source port numbers and 500 different destination port
   numbers.

   The error of the binary search was chosen to be lower than 0.1% of
   the expected results.  The experiments were executed 10 times.

   The results are presented in Figure 7.  The unit for the maximum
   connection establishment rate is 1,000 connections per second.  The
   unit for throughput is 1,000 packets per second (measured with
   bidirectional traffic; the number of all packets per second is
   displayed).

   num. CPU cores             1         2         4         8        16
   src ports              2,000     2,000     2,000     2,000     2,000
   dst ports                500       500       500       500       500
   num. conn.         1,000,000 1,000,000 1,000,000 1,000,000 1,000,000
   num. experiments          10        10        10        10        10
   error                    100       100       100       100       100
   cps median             228.6     358.5     537.4     569.9     602.6
   cps min                226.5     352.5     530.7     562.0     593.7
   cps max                230.5     362.4     543.0     578.3     609.7
   cps rel. scale up          1     0.784     0.588     0.312     0.165
   throughput median      251.8     405.7     582.4     604.1     612.3
   throughput min         249.8     402.9     573.2     587.3     599.8
   throughput max         253.3     409.6     585.7     607.2     616.6
   tp. rel. scale up          1     0.806     0.578     0.300     0.152

       Figure 7: Scale up of Jool against the number of CPU cores

   Both the maximum connection establishment rate and the throughput
   scaled up poorly with the number of active CPU cores.  The increase
   of the performance was very low above 4 CPU cores.

3.3.  Performance degradation caused by the number of sessions

   To examine how the performance of Jool degrades with the number of
   connections, the number of connections was increased fourfold by
   doubling the size of both the source port number range and the
   destination port number range.  We did not reach the limits of the
   hardware regarding the number of connections, because, unlike
   iptables, Jool also worked with 1.6 billion connections.

   The error of the binary search was chosen to be lower than 0.1% of
   the expected results, and the experiments were executed 10 times
   (except for the very long lasting measurements with 800 million and
   1.6 billion connections, to save execution time).

   The results are presented in Figure 8.  The unit for the maximum
   connection establishment rate is 1,000 connections per second.  The
   unit for throughput is 1,000 packets per second (measured with
   bidirectional traffic; the number of all packets per second is
   displayed).

   num. conn.        1.56M   6.25M     25M    100M    400M   1600M
   src ports         2,500   5,000  10,000  20,000  40,000  40,000
   dst ports           625   1,250   2,500   5,000  10,000  40,000
   num. exp.            10      10      10      10       5       5
   error               100     100     100     100   1,000   1,000
   cps median        480.2   394.8   328.6   273.0   243.0   232.0
   cps min           468.6   392.7   324.9   269.4   243.0   230.5
   cps max           484.9   397.4   331.3   280.6   244.5   233.6
   throughput med.   511.5   423.9   350.0   286.5   257.8   198.4
   throughput min    509.2   420.3   348.2   284.2   257.8   195.3
   throughput max    513.1   428.3   352.5   290.8   260.9   201.6

      Figure 8: Performance of Jool against the number of sessions

   The performance of Jool shows degradation over the entire range of
   the number of connections.  We have not yet analyzed the root cause
   of the degradation, and we are not aware of the implementation
   details of its connection tracking table.  We also plan to check the
   memory consumption of Jool, which is definitely lower than that of
   iptables.

3.4.  Connection tear down rate

   Basically, the same measurement method was used as in Section 2.4;
   however, having no parameters of Jool to tune, only a single
   measurement series was performed to determine the deletion time of
   the empty connection tracking table.  The median, minimum and
   maximum values of the 10 measurements were 0.46s, 0.42s and 0.50s,
   respectively.

   The same setup and parameters were used as in Section 2.3, and the
   experiments were executed 10 times (except for the long lasting
   measurements with 1.6 billion connections).

   The results are presented in Figure 9.  The unit for the connection
   tear down rate is 1,000,000 connections per second.

   num. conn.             1.56M   6.25M     25M    100M    400M   1600M
   src ports              2,500   5,000  10,000  20,000  40,000  40,000
   dst ports                625   1,250   2,500   5,000  10,000  40,000
   num. exp.                 10      10      10      10      10       5
   full contr. del med     0.87    2.05    7.84   36.38  126.09  474.68
   full contr. del min     0.80    2.02    7.80   36.27  125.84  473.20
   full contr. del max     0.91    2.09    7.94   36.80  127.54  481.38
   empty contr. del med    0.46    0.46    0.46    0.46    0.46    0.46
   conn. deletion time     0.41    1.59    7.38   35.92  125.63  474.22
   conn. t. d. r. (M)     3.811   3.931   3.388   2.784   3.184   3.374

      Figure 9: Connection tear down rate of Jool against the number
      of connections

   The connection tear down performance of Jool is excellent at any
   number of connections.  It is about an order of magnitude higher
   than its connection establishment rate, and also higher than the
   connection tear down rate of iptables.  (A slight degradation can
   be observed at 100M connections.)

4.  Acknowledgements

   The measurements were carried out remotely, using the resources of
   NICT StarBED, 2-12 Asahidai, Nomi-City, Ishikawa 923-1211, Japan.
   The author would like to thank Shuuhei Takimoto for the possibility
   to use StarBED, as well as Satoru Gonno and Makoto Yoshida for
   their help and advice in StarBED usage related issues.

   The author would like to thank Ole Troan for his comments on the
   v6ops mailing list, made while the scalability measurements of
   iptables were intended to be a part of
   [I-D.ietf-v6ops-transition-comparison].

5.  IANA Considerations

   This document does not make any request to IANA.

6.  Security Considerations

   TBD.

7.  References

7.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC4814]  Newman, D. and T. Player, "Hash and Stuffing: Overlooked
              Factors in Network Device Benchmarking", RFC 4814,
              DOI 10.17487/RFC4814, March 2007,
              <https://www.rfc-editor.org/info/rfc4814>.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
              May 2017, <https://www.rfc-editor.org/info/rfc8174>.

   [RFC8219]  Georgescu, M., Pislaru, L., and G. Lencse, "Benchmarking
              Methodology for IPv6 Transition Technologies", RFC 8219,
              DOI 10.17487/RFC8219, August 2017,
              <https://www.rfc-editor.org/info/rfc8219>.
7.2.  Informative References

   [I-D.ietf-v6ops-transition-comparison]
              Lencse, G., Martinez, J. P., Howard, L., Patterson, R.,
              and I. Farrer, "Pros and Cons of IPv6 Transition
              Technologies for IPv4aaS", Work in Progress, Internet-
              Draft, draft-ietf-v6ops-transition-comparison-02, 3 March
              2022.

   [I-D.lencse-bmwg-benchmarking-stateful]
              Lencse, G. and K. Shima, "Benchmarking Methodology for
              Stateful NATxy Gateways using RFC 4814 Pseudorandom Port
              Numbers", Work in Progress, Internet-Draft, draft-lencse-
              bmwg-benchmarking-stateful-03, 4 March 2022.

   [LEN2019]  Lencse, G. and Y. Kadobayashi, "Comprehensive Survey of
              IPv6 Transition Technologies: A Subjective Classification
              for Security Analysis", IEICE Transactions on
              Communications, vol. E102-B, no. 10, pp. 2021-2035,
              DOI 10.1587/transcom.2018EBR0002, 1 October 2019.

   [LEN2020]  Lencse, G., "Benchmarking Authoritative DNS Servers",
              IEEE Access, vol. 8, pp. 130224-130238,
              DOI 10.1109/ACCESS.2020.3009141, July 2020.

   [LEN2021]  Lencse, G., "Design and Implementation of a Software
              Tester for Benchmarking Stateless NAT64 Gateways", IEICE
              Transactions on Communications,
              DOI 10.1587/transcom.2019EBN0010, 1 February 2021.

Appendix A.  Change Log

A.1.  00

   Initial version: scale up of iptables.

A.2.  01

   Added the scale up of Jool.

Author's Address

   Gabor Lencse
   Szechenyi Istvan University
   Gyor
   Egyetem ter 1.
   H-9026
   Hungary
   Email: lencse@sze.hu