Individual                                                      R. White
Internet-Draft                                                  R. Adams
Expires: April 4, 2005                                     Cisco Systems
                                                               V. Manral
                                                            SiNett Corp.
                                                         October 4, 2004

  Considerations in Benchmarking Routing Protocol Network Convergence
                   draft-white-network-benchmark-01

Status of this Memo

   This document is an Internet-Draft and is subject to all provisions
   of Section 3 of RFC 3667.  By submitting this Internet-Draft, each
   author represents that any applicable patent or other IPR claims of
   which he or she is aware have been or will be disclosed, and any of
   which he or she becomes aware will be disclosed, in accordance with
   RFC 3668.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on April 4, 2005.

Copyright Notice

   Copyright (C) The Internet Society (2004).
Abstract

   This document discusses some of the definitions required to specify
   benchmarks for measuring routing protocol convergence within a
   network, some of the possible ways to benchmark routing protocol
   performance within a network, and some of the implications of those
   benchmarks.  The definition of convergence is discussed first,
   followed by polling network devices.  Several tests which are
   commonly used to measure network convergence are then examined.

   This draft does not attempt to define which techniques should be
   used to benchmark network convergence; it only describes issues
   testers should take into account when attempting to measure network
   convergence using various methods.

1.  Motivation

   The ability to benchmark individual components within a network is
   coming under greater scrutiny, and specifications are being written
   to standardize ways to measure the performance of such components
   within given frameworks.  The next level of benchmarking, measuring
   the performance of whole networks, has not yet been approached.  But
   what do we mean by the performance of a network from the perspective
   of routing protocols?  Various tests have been used in the past to
   measure the convergence of a network, some of which actually measure
   completely different things than others.

   It is important to examine the measurement of network convergence in
   a way that exposes these differences, and that gives vendors, end
   users, and the research community some common ground when discussing
   network convergence.

2.  A Problem of Definitions

   As we examine the issues and concepts surrounding the measurement of
   network performance in terms of convergence, we find that most of
   the basic problems we face surround defining the terms in use.  For
   instance, what is convergence, exactly?  What is a network?
In the
   following sections, we discuss and attempt to address each of these
   concepts.

2.1  Networks

   In its most basic form, a network is a group of devices,
   interconnected in some way, which send data over those
   interconnections for various purposes.  When we discuss routing
   protocol convergence within a network, however, the definition needs
   to be more precise.  For instance, since hosts do not generally
   participate in routing, should they be considered part of the
   network when benchmarking the performance of a routing protocol?
   The obvious answer appears to be a resounding no, but in some
   possible test types, hosts, acting as testing devices, play a large
   part in the test itself.

   When considering tests in which hosts participate as traffic or
   route generators, or as other testing devices, we must consider the
   impact these hosts have on the test results, even though we may not
   consider them part of the network.

2.2  Convergence

   Convergence is probably one of the hardest words in networking to
   define.  Just about everyone who has worked on networks for some
   time knows what it means, yet it is difficult to explain to someone
   who does not understand how a network works.  This is because
   several different meanings are attributed to convergence, and the
   intended meaning depends on the context in which the word is used.
   Convergence can mean:
   o  The time at which all the routing protocol processes running on
      devices which participate in routing in the network agree on the
      best path to each reachable destination in the network.
   o  The time at which the best path to each reachable destination in
      the network has been loaded into some local table which may then
      be used to forward packets (the routing information base, or
      RIB).
   o  The time at which each router in the network has built the tables
      necessary to actually forward packets through the network, so
      that a packet transmitted from one part of the network would
      actually reach any given reachable destination within the
      network.

   For instance, on a Cisco router, "show ip ospf stats" would allow
   the tester to see the time of the last completed SPF, "show ip
   route" would allow the tester to see what routes are installed in
   the RIB, and "show ip cef" would allow the tester to see the
   forwarding information which has been built from the RIB.  Each test
   designed to measure the performance of routing protocols within a
   network must determine which type of convergence is being measured,
   whether that measurement is appropriate to the information being
   gathered, and which test will actually measure the desired type of
   convergence.

2.3  Polling Devices in a Network

   One common way to measure network convergence is to poll the devices
   in the network, using some command supplied within the routing
   software, to determine when particular events have occurred, or when
   particular pieces of information have reached all the routers in the
   network.  Polling eliminates the need for the clocks of the devices
   within the network to be synchronized for the test to have
   meaningful results.  However, there are some issues with polling
   devices within the network which need to be addressed in any test
   which relies on it; the first is the rate at which polling takes
   place.

   If, in a test, you are attempting to measure some parameter to
   within one second of its occurrence, then you need to poll at a rate
   much higher than once per second.
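   This quantization effect can be sketched in a short simulation
   (hypothetical Python; the function name and the numbers are
   illustrative, and not part of any standard test tool):

   ```python
   import math

   # Hypothetical sketch (not from this draft): how a fixed polling
   # interval quantizes the measured duration of a network event.
   def measured_duration(event_start, event_end, poll_interval):
       """Duration a poller would report for an event.

       A poller only notices a state change at the first poll at or
       after the change, so both endpoints are rounded up to the next
       poll tick before the difference is taken.
       """
       def first_poll_at_or_after(t):
           return math.ceil(t / poll_interval) * poll_interval

       return (first_poll_at_or_after(event_end)
               - first_poll_at_or_after(event_start))

   # The event starts just after the 0 second poll and ends just
   # before the 2 second poll, so roughly 1.9 seconds really elapsed.
   coarse = measured_duration(0.05, 1.95, poll_interval=1.0)  # reports 1.0
   fine = measured_duration(0.05, 1.95, poll_interval=0.1)    # reports ~1.9
   ```

   Shrinking the interval to a small fraction of the desired accuracy
   brings the reported figure close to the real elapsed time, which is
   the motivation for the guidance on polling rates in this section.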
     test starts here
     |
     |                   event occurs here
     |                   |
     v                   v
   -+----------+----------+----------+---
    ^          ^          ^          ^
    |          |          |          |
   0 seconds   1 second   2 seconds  3 seconds

   For instance, in this time line, suppose a polling event is set up
   to take place every second.  An event is started just after some
   polling event takes place, but the polling process does not
   recognize the test as starting until the 1 second poll.  The event
   being measured occurs just before the 2 second poll, and the polling
   process detects it at the 2 second poll.  The polling process would
   indicate that one second elapsed between the start of the event and
   its finish; in reality, closer to two seconds elapsed.

   The interval of the polling process can be reduced until the
   measurement is felt to be accurate, but it should be at most half of
   the desired accuracy.  Common practice suggests it should be about
   one tenth of the desired accuracy.

   A second consideration when polling for network events is the
   performance of the device running the polling process.  If the
   process cannot poll each device at the scheduled interval, or the
   polling is "jittered" (the time between each actual poll varies by
   some amount), the accuracy of the tests will be called into
   question.  The amount of jitter introduced by the polling device,
   and the rate at which the device can effectively poll, should be
   measured in some way, and taken into account when designing tests
   which rely on polling.

   Finally, when polling devices to determine when a network event
   occurs, issues with serialization must be considered.  Most devices
   used for polling will not be able to poll several devices within the
   network at once, and will thus serialize the polling of devices.
    p1  p3  p5  p7  p9
    | p2| p4| p6| p8| p10
    | | | | | | | | | |
    v v v v v v v v v v
   -+----------+----------+----------+---
    ^          ^          ^          ^
    |          |          |          |
   0 seconds   1 second   2 seconds  3 seconds

   Suppose, for instance, a single device is polling ten devices in the
   network.  If it can poll five devices per second, it will take a
   full two seconds for it to detect any event on all ten devices;
   since each individual device is polled only once every two seconds,
   this gives an effective accuracy of about four seconds.  The amount
   of time required for a polling device to serialize through all the
   devices it is polling needs to be considered when polling a very
   large number of devices.

2.4  Passing Traffic Through the Network to Determine Convergence

   One of the most widely used tests for determining network
   convergence is to start a traffic stream at one end of a network,
   disrupt or complete the network topology in some way, and measure
   how long the traffic stream either is not delivered, or takes to be
   delivered.  For instance:

              Source----R1----R2----Sink

   A traffic stream is generated on Source, and the link between R1 and
   R2 is connected in some way.  The time between the connection of
   this link and the arrival of the traffic at the Sink is measured as
   network convergence.  This type of test is extremely useful in
   testing the real response of a network to changing conditions.
   There are, however, some considerations which should be examined
   when using this sort of test, or when examining its results.

2.4.1  The Various Elements of Performance Cannot Be Separated

   Using this sort of testing, there is no way to separate the
   performance of a routing protocol from the performance of the
   interaction between the routing protocol and the forwarding engine,
   nor from the performance of the forwarding engine itself.
In many
   tests, this is acceptable, since these are all elements of the
   network as a whole; but if specific elements of routing protocol
   performance are being measured, such tests can make it difficult to
   analyze the results.

2.4.2  The Total Convergence of the Network May Not Be Measured

   Consider the following topology:

   Source-----R1----R2-----Sink
               |     |
              R3    R4
               |     |
              R5----R6

   Suppose a traffic stream is sourced from Source, and then all the
   devices in the network (R1 through R6) are brought up.  The time
   from the device startup to the traffic stream reaching the Sink is
   measured as network convergence.

   As soon as the path Source, R1, R2, Sink converges, the Sink will
   begin receiving traffic, and the network will be considered
   converged by the test.  However, without polling the remaining
   routers, R3 through R6, there is no way to know whether those
   routers have also converged on the best paths to the Sink and the
   Source.  While this example may be considered extreme, there are
   many complex topologies where:
   o  The path chosen by the traffic stream may not be the path
      expected.
   o  The path chosen by the traffic stream may switch during network
      convergence, with the stream taking some secondary path at first,
      and successively better paths converging over the life of the
      test.
   o  The path chosen by the traffic stream switches so quickly that no
      traffic is lost, while the routing protocols still take some time
      to converge.

   Tests which rely on traffic passing through the network to determine
   network convergence times should thoroughly examine the way in which
   the test topology converges, and examine the consistency of that
   convergence, with enough test runs to get a good feel for the range
   of possible results.
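   The core measurement in this sort of test can be sketched as
   follows (hypothetical Python; the function, the names, and the
   timestamps are illustrative, and not taken from any standard tool):

   ```python
   # Hypothetical sketch (not from this draft): deriving a convergence
   # figure from the arrival times of test traffic at the Sink.
   def traffic_blackout(arrival_times, event_time):
       """Time from a topology event until traffic next reaches the sink.

       Note the caveat in the text: this tells us when *one* usable
       path exists, not when every router in the topology has
       converged.
       """
       after = [t for t in arrival_times if t >= event_time]
       if not after:
           return None  # traffic never resumed during the test window
       return min(after) - event_time

   # Packets arrive every 10 ms until the event at t = 1.0 s breaks
   # the path; delivery resumes at t = 1.35 s.
   arrivals = ([i * 0.01 for i in range(100)]
               + [1.35 + i * 0.01 for i in range(50)])
   blackout = traffic_blackout(arrivals, event_time=1.0)  # ~0.35 s
   ```

   The sketch reports a figure as soon as any packet gets through,
   which is exactly why such a test can declare the network converged
   while some routers are still converging.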
Examining the same test sequence with slight
   changes in the network topology may help to provide an understanding
   of how the network under test converges, and may also provide more
   insight into the factors impacting convergence in the test network.

   It is also possible, if the test network does not converge
   completely until some time after the test traffic successfully
   passes through the topology, for the continuing convergence to
   impact the results of a second test run placed too close behind the
   first.  If a second test is started as soon as traffic makes it
   through the test topology, its results may be skewed by convergence
   still taking place from the first test run.

   These are important considerations which should be noted when
   examining or performing tests which rely on the presence of a data
   stream within a routing system to measure convergence.

3.  Informative References

   [OSPF-BENCH]
              Manral, V., White, R., and A. Shaikh, "Benchmarking Basic
              OSPF Single Router Control Plane Convergence",
              draft-ietf-bmwg-ospfconv-intraarea-10 (work in progress),
              July 2004.

Authors' Addresses

   Russ White
   Cisco Systems

   EMail: riw@cisco.com

   Robert Adams
   Cisco Systems

   EMail: robeadam@cisco.com

   Vishwas Manral
   SiNett Corp.

   EMail: vishwas@sinett.com

Intellectual Property Statement

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.
   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Disclaimer of Validity

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE
   INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR
   IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF
   THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
   WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Copyright Statement

   Copyright (C) The Internet Society (2004).  This document is subject
   to the rights, licenses and restrictions contained in BCP 78, and
   except as set forth therein, the authors retain all their rights.

Acknowledgment

   Funding for the RFC Editor function is currently provided by the
   Internet Society.