Network Working Group                                          R. Sparks
Internet-Draft                                                    Oracle
Intended status: Informational                                T. Kivinen
Expires: February 8, 2016                                  INSIDE Secure
                                                          August 7, 2015

                     Tracking Reviews of Documents
                draft-sparks-genarea-review-tracker-03

Abstract

Several review teams ensure specific types of review are performed on
Internet-Drafts as they progress towards becoming RFCs.
The tools used by these teams to assign and track reviews would benefit
from tighter integration with the Datatracker.  This document discusses
requirements for improving those tools without disrupting current
workflows.

Status of This Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF).  Note that other groups may also distribute working
documents as Internet-Drafts.  The list of current Internet-Drafts is
at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress."

This Internet-Draft will expire on February 8, 2016.

Copyright Notice

Copyright (c) 2015 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents carefully,
as they describe your rights and restrictions with respect to this
document.  Code Components extracted from this document must include
Simplified BSD License text as described in Section 4.e of the Trust
Legal Provisions and are provided without warranty as described in the
Simplified BSD License.

Table of Contents

1.  Introduction
2.  Overview of current workflows
3.  Requirements
    3.1.  Secretariat focused
    3.2.  Review-team Secretary focused
    3.3.  Reviewer focused
    3.4.  Review Requester and Consumer focused
    3.5.  Statistics focused
4.  Security Considerations
5.  IANA Considerations
6.  Acknowledgments
7.  Changelog
    7.1.  03
    7.2.  02
    7.3.  01
    7.4.  00
8.  Informative References
Appendix A.  A starting point for Django models supporting the review
             tool
Appendix B.  Suggested features deferred for future work
Authors' Addresses

1.  Introduction

As Internet-Drafts are processed, reviews are requested from several
review teams.  For example, the General Area Review Team (Gen-ART) and
the Security Directorate (Secdir) perform reviews of documents that are
in IETF Last Call.  Gen-ART always performs a follow-up review when the
document is scheduled for an IESG telechat.  Secdir usually performs a
follow-up review, but the Secdir secretary may choose not to request
that follow-up if any issues identified at Last Call are addressed and
there are otherwise no major changes to the document.  These teams also
perform earlier reviews of documents on demand.  There are several
other teams that perform similar services, often focusing on specific
areas of expertise.

The secretaries of these teams manage a pool of volunteer reviewers.
Documents are assigned to reviewers, taking various factors into
account.  For instance, a reviewer will not be assigned a document for
which they are an author or shepherd.  Reviewers are given a deadline,
usually driven by the end of Last Call or an IESG telechat date.  The
reviewer sends each completed review to the team's mailing list, and to
any other lists that are relevant for the document being reviewed.
Often, a thread ensues on one or more of those lists to resolve any
issues found in the review.

The secretaries and reviewers from several teams are using a tool
developed and maintained by Tero Kivinen.  Much of its design predates
the modern Datatracker.  The application currently keeps its own data
store, and learns about documents needing review by inspecting
Datatracker and tools.ietf.org pages.  Most of those pages are easy to
parse, but the last-call pages, in particular, require some effort.
Tighter integration with the Datatracker would simplify the logic used
to identify documents ready for review, make it simpler for the
Datatracker to associate reviews with documents, and allow users to
reuse their Datatracker credentials.  It would also make it easier to
detect other potential review-triggering events, such as a document
entering working group last call, or an RFC's standards level being
changed without revising the RFC.  Tero currently believes this
integration is best achieved by a new implementation of the tool.  This
document captures requirements for that reimplementation, with a focus
on the workflows that the new implementation must take care not to
disrupt.  It also discusses new features, including changes suggested
for the existing tool at its issue tracker [art-trac].
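The involvement rule described above (a reviewer is never assigned a
document they are already associated with) can be sketched in a few
lines of Python.  The data shapes below are purely illustrative
assumptions, not the tool's or the Datatracker's actual models.

```python
# Sketch of the reviewer-exclusion rule: a reviewer already involved
# with a document (author, shepherd, chair) is not eligible to review
# it.  Plain dicts and sets stand in for real database records.

def eligible(reviewer, doc):
    """Return True if `reviewer` has no existing association with `doc`."""
    involved = set(doc["authors"]) | {doc.get("shepherd")} | set(doc.get("chairs", []))
    return reviewer not in involved

doc = {"name": "draft-example-foo-00",
       "authors": ["alice", "bob"],
       "shepherd": "carol",
       "chairs": ["dave"]}

assert not eligible("alice", doc)   # author: excluded
assert not eligible("carol", doc)   # shepherd: excluded
assert eligible("erin", doc)        # no prior association: eligible
```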
For more information about the various review teams, see the following
references:

   +--------------+---------------------+
   | Gen-ART      | [Gen-ART] [RFC6385] |
   | Secdir       | [Secdir]            |
   | AppsDir      | [AppsDir]           |
   | OPS-dir      | [OPS-dir]           |
   | RTG-dir      | [RTG-dir]           |
   | MIB Doctors  | [MIBdoctors]        |
   | YANG Doctors | [YANGdoctors]       |
   +--------------+---------------------+

2.  Overview of current workflows

This section gives a high-level overview of how the review team
secretaries and reviewers use the existing tool.  It is not intended to
be comprehensive documentation of how review teams operate.  Please see
the references for those details.

For many teams, the team's secretary periodically (typically once a
week) checks the tool for documents it has identified as ready for
review.  The tool compiles this list from Last Call announcements and
IESG telechat agendas.  The secretary creates a set of assignments from
this list into the reviewer pool, choosing the reviewers in roughly
round-robin order.  That order can be perturbed by several factors.
Reviewers have different levels of availability: some are willing to
review multiple documents a month, while others may only be willing to
review a document every other month.  The assignment process takes
exceptional conditions such as reviewer vacations into account.
Furthermore, secretaries are careful not to assign a document to a
reviewer who is an author, shepherd, or responsible WG chair, or who
has some other existing association with the document; the preference
is to get a reviewer with a fresh perspective.  The secretary may
discover reasons to change assignments while going through the list of
documents.  To avoid causing a reviewer to make a false start on a
review, the secretaries complete the full list of assignments before
sending notifications to anyone.
This assignment process can take several minutes, and it is possible
for new Last Calls to be issued while the secretary is making
assignments.  The secretary typically checks for newly ready documents
just before issuing the assignments, and updates the assignments if
necessary.

Some teams operate in more of a review-on-demand model.  RTG-dir, for
example, primarily initiates reviews at the request of a Routing AD.
They may also start an early review at the request of a working group
chair.  In either case, the reviewers are chosen manually from the pool
of available reviewers, driven by context rather than by a round-robin
ordering.

The issued assignments are either sent to the review team's email list
or emailed directly to the assigned reviewer.  The assignments are
reflected in the tool.  For teams handling different types of reviews
(Last Call vs. Telechat, for example), the secretary typically
processes the documents for each type of review separately, potentially
with different assignment criteria.  In Gen-ART, for example, the Last
Call reviewer for a document will almost always get the follow-up
Telechat review assignment.  Similarly, Secdir assigns any re-reviews
of a document to the same reviewer.  Other teams may choose to assign a
different reviewer.

Reviewers discover their assignments through email or by looking at
their queue in the tool.  The secretaries for some teams (such as
OPS-dir and RTG-dir) insulate their team members from using the tool
directly.  These reviewers only work through the review team's email
list or through direct email.  On teams that have the reviewers use the
tool directly, most reviewers only check the tool when they see they
have an assignment via the team's email list.  A reviewer has the
opportunity to reject the assignment for any reason.
While the tool provides a way to reject assignments, reviewers
typically use email to coordinate rejections with the team secretary.
The secretary will find another volunteer for any rejected assignments.

The reviewer can indicate in the tool that the assignment is accepted
before starting the review, but this feature is rarely used.

The reviewer sends a completed review to the team's email list or
secretary, to any other lists relevant to the review, and usually to
the draft's primary email alias.  For instance, many Last Call reviews
are also sent to the IETF general list.  The teams typically have a
template format for the review.  Those templates usually start with a
summary describing the conclusion of the review.  Typical summaries are
"Ready for publication" or "On the right track, but has open issues".
The reviewer (or, for teams that insulate their reviewers, the
secretary) uses the tool to indicate that the review is complete,
provides the summary, and has an opportunity to provide a link to the
review in the archives.  Note, however, that having to wait for the
review to appear in the archive to know the link to paste into the tool
is a significant enough impediment that reviewers often do not provide
this link.  The Secdir secretary manually collects these links from the
team's email list and adds them to the tool.

Occasionally, a document is revised between when a review assignment is
made and when the reviewer starts the review.  Different teams can have
different policies about whether the reviewer should review the
assigned version or the current version.

3.  Requirements

3.1.  Secretariat focused

o  The Secretariat must be able to configure secretaries and reviewers
   for review teams (by managing Role records).
o  The Secretariat must be able to perform any secretary action on
   behalf of a review team secretary (and thus must be able to perform
   any reviewer action on behalf of a reviewer).

3.2.  Review-team Secretary focused

o  A secretary must be able to see what documents are ready for review
   of a given type (such as a Last Call review).

o  A secretary must be able to assign reviews for documents that may
   not have been automatically identified as ready for a review of a
   given type.  (In addition to being the primary assignment method
   for teams that only initiate reviews on demand, this allows the
   secretary to work around errors and handle special cases, including
   early review requests.)

o  A secretary must be able to work on and issue a set of assignments
   as an atomic unit.  No assignment should be issued until the
   secretary declares the set of assignments complete.

o  The tool must support teams that have multiple secretaries.  The
   tool should warn secretaries that are simultaneously working on
   assignments, and protect against conflicting assignments being
   made.

o  It must be easy for the secretary to discover that more documents
   have become ready for review while working on an assignment set.

o  The tool should make preparing the assignment email to the team's
   email list easy.  For instance, the tool could prepare the message,
   give the secretary an opportunity to edit it, and handle sending it
   to the team's email list.

o  It must be possible for a secretary to indicate that the review
   team will not provide a review for a document (or a given version
   of a document).  This indication should be taken into account when
   presenting the documents that are ready for review of a given type.
   This will also make it possible to show on a document's page that
   no review is expected from this team.
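The round-robin behavior implied by the secretary and reviewer
requirements — walking the rotation in order while honoring frequency,
not-available-until, and skip-next-n settings — could be sketched as
follows.  The class and field names are assumptions for illustration
only, not the proposed schema.

```python
from datetime import date

class Reviewer:
    """Illustrative reviewer record with the availability settings
    described in these requirements."""
    def __init__(self, name, frequency_days=7, unavailable_until=None,
                 skip_next=0):
        self.name = name
        self.frequency_days = frequency_days    # at most one review every N days
        self.unavailable_until = unavailable_until
        self.skip_next = skip_next              # skip the next N assignments
        self.last_assigned = None

def next_reviewer(rotation, today):
    """Walk the rotation in order; return the first assignable reviewer."""
    for r in rotation:
        if r.unavailable_until and today < r.unavailable_until:
            continue                            # on hiatus / unavailable
        if r.last_assigned and (today - r.last_assigned).days < r.frequency_days:
            continue                            # reviewed too recently
        if r.skip_next > 0:
            r.skip_next -= 1                    # consume one skip, move on
            continue
        return r
    return None                                 # secretary must assign manually

rotation = [Reviewer("alice", skip_next=1), Reviewer("bob")]
assert next_reviewer(rotation, date(2015, 8, 7)).name == "bob"    # alice skipped
assert next_reviewer(rotation, date(2015, 8, 7)).name == "alice"  # skip consumed
```

Note that a skip is consumed when the reviewer is passed over, matching
the "skipped the next n times they would normally have received an
assignment" wording in the Reviewer focused section.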
o  A secretary must be able to easily see who the next available
   reviewers are, in order.

o  A secretary must be able to edit a reviewer's availability, in
   terms of frequency, not-available-until-date, and
   skip-next-n-assignments.  (See the description of these settings in
   the Reviewer focused section.)

o  The tool should make it easy for the secretary to see any team
   members that have requested to review a given document when it
   becomes available for review.

o  The tool should make it easy for the secretary to identify that a
   reviewer is already involved with a document.  The current tool
   allows the secretary to provide a regular expression to match
   against the document name.  If the expression matches, the document
   is not available for assignment to this reviewer.  For example,
   Tero will not be assigned documents matching
   '^draft-(kivinen|ietf-tcpinc)-.*$'.  The tool should also take into
   consideration any roles, such as document shepherd, that the
   Datatracker knows about.

o  The tool should make it easy for the secretary to see key features
   of a document ready for assignment, such as its length, its
   authors, the group and area it is associated with, its title and
   abstract, its states (such as IESG or WG states), and any other
   personnel (such as the shepherd and reviewers already assigned from
   other teams) involved in the draft.

o  The tool must make it easy for the secretary to detect and process
   re-review requests on the same version of a document (such as when
   a document has an additional Last Call only to deal with new IPR
   information).

o  Common operations on groups of documents should be easy for the
   secretary to process as a group with a minimum amount of
   interaction with the tool.  For instance, it should be possible to
   process all of the documents described by the immediately preceding
   bullet with one action.
   Similarly, for teams that assign re-reviews to the same reviewer,
   issuing all re-review requests should be a simple action.

o  A secretary must be able to see which reviewers have outstanding
   assignments.

o  The tool must make it easy for the secretary to see the result of
   previous reviews from this team for a given document.  In Secdir,
   for example, if the request is for a revision that has only minor
   differences, and the previous review result was "Ready", a new
   assignment will not be made.  If the given document replaces one or
   more other prior documents, the tool must make it easy for the
   secretary to see the results of previous reviews of the replaced
   documents.

o  The tool must make it easy for the secretary to see the result of
   previous reviews from this team for all documents across
   configurable recent periods of time (such as the last 12 months).
   An RTG-dir secretary, for example, would use this result to aid in
   the manual selection of the next reviewer.

o  The tool must make it easy for the secretary to see the recent
   performance of a reviewer while making an assignment (see
   Section 3.5).  This allows the secretary to detect overburdened or
   unresponsive volunteers earlier in the process.

o  A secretary must be able to configure the tool to remind them to
   follow up when actions are due.  (For instance, a secretary could
   receive email when a review is about to become overdue.)

o  A secretary must be able to assign multiple reviewers to a given
   draft at any time.  In particular, a secretary must be able to
   assign an additional reviewer when an original reviewer indicates
   their review is likely to be only partially complete.

o  A secretary must be able to withdraw a review assignment.

o  A secretary must be able to perform any reviewer action on behalf
   of the reviewer.
o  A secretary must be able to configure the review team's set of
   reviewers (by managing Role records for the team).

o  Information about a reviewer must not be lost when the reviewer is
   removed from a team.  (Frequently, reviewers come back to teams
   later.)

o  A secretary must be able to delegate secretary capabilities in the
   tool (similar to how a working group chair can assign a delegate).
   This allows review teams to self-manage secretary vacations.

3.3.  Reviewer focused

o  A reviewer must be able to indicate availability, both in frequency
   of reviews and as "not available until this date."  The current
   tool speaks of frequency in these terms:

   -  Assign at maximum one new review per week

   -  Assign at maximum one new review per fortnight

   -  Assign at maximum one new review per month

   -  Assign at maximum one new review per two months

   -  Assign at maximum one new review per quarter

o  Reviewers must be able to indicate hiatus periods.  Each period may
   be either "soft" or "hard".

   -  A hiatus must have a start date.  It may have an end date, or
      may be indefinite.

   -  During a hiatus, the reviewer will not be included in the normal
      review rotation.  When a provided end date is reached, the
      reviewer will automatically be included in the rotation in their
      usual order.

   -  During a "soft" hiatus, the reviewer must not be assigned new
      reviews, but is expected to complete existing assignments and do
      follow-up reviews.

   -  During a "hard" hiatus, the reviewer must not be assigned any
      new reviews, and the secretary must be prompted to reassign any
      outstanding or follow-up reviews.

o  Reviewers must be able to indicate that they should be skipped the
   next n times they would normally have received an assignment.

o  Reviewers must be able to indicate that they are transitioning to
   inactive, providing a date for the end of the transition period.
   During this transition time, the reviewer must not be assigned new
   reviews, but is expected to complete outstanding assignments and
   follow-up reviews.  At the end of the transition period, the
   secretary must be prompted to reassign any outstanding or follow-up
   reviews.  (This allows review-team members that are taking on, say,
   AD responsibility to transition gracefully to an inactive state for
   the team.)

o  Both the reviewer and the secretary will be notified by email of
   any modifications to a reviewer's availability.

o  A reviewer must be able to easily discover new review assignments.
   (The tool might send email directly to an assigned reviewer in
   addition to sending the set of assignments to the team's email
   list.  The tool might also use the Django messages framework to let
   a reviewer who is logged into the Datatracker know a new review
   assignment has been made.)

o  Reviewers must be able to see their current sets of outstanding,
   completed, and rejected assignments.  The presentation of those
   sets should either be separate or, if combined, the sets should be
   visually distinct.

o  A reviewer should be able to request to review a particular
   document.  The draft may be in any state: available and unassigned,
   already assigned to another reviewer, or not yet available.

o  A reviewer must be able to reject a review assignment, optionally
   providing the secretary with an explanation for the rejection.  The
   tool will notify the secretary of the rejection by email.

o  A reviewer must be able to indicate that they have accepted and are
   working on an assignment.

o  A reviewer must be able to indicate that a review is only partially
   completed, asking the secretary to assign an additional reviewer.
   The tool will notify the secretary of this condition by email.
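The soft/hard hiatus semantics described in the bullets above can be
sketched as two simple predicates.  The class and field names here are
illustrative assumptions, not the proposed data model.

```python
from datetime import date

class Hiatus:
    """Illustrative hiatus record: a start date, an optional end date
    (None means indefinite), and a soft/hard flag."""
    def __init__(self, start, end=None, hard=False):
        self.start, self.end, self.hard = start, end, hard

    def active(self, today):
        return self.start <= today and (self.end is None or today < self.end)

def may_assign_new(hiatuses, today):
    """No new assignments during any active hiatus, soft or hard."""
    return not any(h.active(today) for h in hiatuses)

def must_reassign_outstanding(hiatuses, today):
    """Only a hard hiatus prompts the secretary to reassign
    outstanding and follow-up reviews."""
    return any(h.active(today) and h.hard for h in hiatuses)

soft = Hiatus(date(2015, 8, 1), date(2015, 9, 1))
hard = Hiatus(date(2015, 8, 1), hard=True)          # indefinite

assert not may_assign_new([soft], date(2015, 8, 15))
assert not must_reassign_outstanding([soft], date(2015, 8, 15))
assert must_reassign_outstanding([hard], date(2015, 8, 15))
assert may_assign_new([soft], date(2015, 9, 1))     # end date reached
```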
o  It should be possible for a reviewer to reject or accept a review
   either by using the tool's web interface or by replying to the
   review assignment email.

o  It must be easy for a reviewer to see when each assigned review is
   due.

o  A reviewer must be able to configure the tool to remind them when
   actions are due.  (For instance, a reviewer could receive email
   when a review is about to become overdue.)

o  A reviewer must be able to indicate that a review is complete,
   capturing where the review is in the archives and the high-level
   review-result summary.

o  It must be possible for a reviewer to clearly indicate which
   version of a document was reviewed.  Documents are sometimes
   revised between when a review is assigned and when it is due.  The
   tool should note the current version of the document, and highlight
   when the review is not for the current version.

o  It must be easy for a reviewer to submit a completed review.

   -  The current workflow, where the reviewer sends email to the
      team's email list (possibly copying other lists) and then
      indicates where to find that review, must continue to be
      supported.  The tool should make it easier to capture the link
      to the review in the team's email list archives (perhaps by
      suggesting links based on a search of the archives).

   -  The tool should allow the reviewer to enter the review into the
      tool via a web form (either as directly provided text or through
      a file-upload mechanism).  The tool will ensure the review is
      posted to the appropriate lists, and will construct the links to
      those posts in the archives.

   -  The tool could also allow the reviewer to submit the review to
      the tool by email (perhaps by replying to the assignment).  The
      tool would then ensure the review is posted to the appropriate
      lists.
3.4.  Review Requester and Consumer focused

o  It should be easy for an AD or group chair to request any type of
   review, but particularly an early review, from a review team.

o  It should be possible for that person to withdraw a review request.

o  It must be easy to find all reviews of a document when looking at
   the document's main page in the Datatracker.  The reference to the
   review must make it easy to see any responses to the review on the
   email lists it was sent to.  If a document "replaces" one or more
   other documents, reviews of the replaced documents should be
   included in the results.

o  It must be easy to find all reviews of a document when looking at
   search result pages and other lists of documents, such as the
   documents on an IESG telechat agenda.

3.5.  Statistics focused

o  It must be easy to see the following, across all teams, a given
   team, or a given reviewer, and independently across all time or
   across configurable recent periods of time:

   -  How many reviews have been completed

   -  How many reviews are in progress

   -  How many in-progress reviews are late

   -  How many completed reviews were late

   -  How many reviews were not completed at all

   -  Average time to complete reviews (from assignment to completion)

o  It must be easy to see, for all teams, for a given team, or for a
   given reviewer, across all time or across configurable recent
   periods:

   -  Total counts of reviews in each review state (done, rejected,
      etc.)

   -  Total counts of completed reviews by result (ready, ready with
      nits, etc.)

o  The above statistics should also be calculated reflecting the size
   of the documents being reviewed (such as using the number of pages
   or words in the documents).

o  Where applicable, statistics should take reviewer hiatus periods
   into account.

o  Access to the above statistics must be easy to configure.
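The per-team counters listed above can be computed from simple review
records.  The record layout in this sketch is an assumption for
illustration; the real implementation would query its own models.

```python
from datetime import date

def review_stats(reviews, now):
    """Compute the basic statistics named in these requirements from a
    list of review records (assumed dict layout: assigned, deadline,
    completed-or-None)."""
    done = [r for r in reviews if r["completed"] is not None]
    in_progress = [r for r in reviews if r["completed"] is None]
    return {
        "completed": len(done),
        "in_progress": len(in_progress),
        "in_progress_late": sum(1 for r in in_progress if now > r["deadline"]),
        "completed_late": sum(1 for r in done if r["completed"] > r["deadline"]),
        "avg_days_to_complete": (
            sum((r["completed"] - r["assigned"]).days for r in done) / len(done)
            if done else None),
    }

reviews = [
    {"assigned": date(2015, 7, 1), "deadline": date(2015, 7, 15),
     "completed": date(2015, 7, 10)},                      # on time
    {"assigned": date(2015, 7, 1), "deadline": date(2015, 7, 15),
     "completed": date(2015, 7, 20)},                      # late
    {"assigned": date(2015, 8, 1), "deadline": date(2015, 8, 15),
     "completed": None},                                   # overdue, open
]
stats = review_stats(reviews, date(2015, 8, 20))
assert stats["completed"] == 2
assert stats["completed_late"] == 1
assert stats["in_progress_late"] == 1
assert stats["avg_days_to_complete"] == 14.0
```

Aggregating the same records per team or per reviewer, or weighting by
document page counts, is a straightforward extension of this shape.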
   Access will be initially limited as follows:

   -  The Secretariat and ADs can see any statistic.

   -  A team secretary can see any statistics for that team.

   -  A reviewer can see any team aggregate statistics, or their own
      reviewer-specific statistics.

o  Where possible, the above statistics should be visible as a
   time-series graph.

o  The implementation should anticipate future enhancements that would
   allow ADs to indicate their position was informed by a given
   review.  Such enhancements would allow reporting correlations
   between reviews and documents that receive one or more DISCUSS
   positions.  However, implementing these enhancements is not part of
   the current project.

4.  Security Considerations

This document discusses requirements for tools that assist review
teams.  These requirements do not affect the security of the Internet
in any significant fashion.  The tools themselves have authentication
and authorization considerations (team secretaries will be able to do
different things than reviewers).

5.  IANA Considerations

This document has no actions for IANA.

6.  Acknowledgments

Tero Kivinen and Henrik Levkowetz were instrumental in forming this set
of requirements and in developing the initial Django models in the
appendix.

The following people provided reviews of this draft: David Black,
Deborah Brungard, Brian Carpenter, Elwyn Davies, Stephen Farrell, Joel
Halpern, Jonathan Hardwick, Russ Housley, Barry Leiba, Jean Mahoney,
Randy Presuhn, Gunter Van De Velde, and Martin Vigoureux.

(If we have missed a reviewer here, or failed to capture or respond to
a review comment, please retransmit and accept our apologies.)

7.  Changelog

7.1.  03

o  Fixed frequent typo haitus -> hiatus.

7.2.  02

o  Clarified the typical Telechat review workflow for Secdir in the
   introduction.

o  Added the MIB Doctors and YANG Doctors teams.
o  Captured the requirement to be able to request a review assignment
   for a given document.

o  Captured a requirement to gracefully handle teams that have
   multiple secretaries.

o  Captured that statistics should reflect document size in addition
   to document count.

o  Noted that assignment rejection is currently coordinated by email.

o  Noted that teams often copy the draft's primary email alias on
   reviews.

o  Noted that hiatus periods should be carefully considered when
   building statistics.

o  Highlighted a couple of scenarios where the tool needs to send
   email to the secretary.

7.3.  01

o  Captured that OPS-dir reviewers do not use the tool directly; only
   the secretaries do.

o  Secretaries must be able to see who has outstanding reviews.

o  Reviewers must be able to see when assignments are due.

o  Captured that RTG-dir reviews documents primarily at the specific
   request of Routing ADs.

o  Captured that RTG-dir QA reviews can be requested by chairs.

o  Captured that RTG-dir assignments are made by unicast, rather than
   through a directorate list.

o  Added a requirement to be able to say "this team isn't going to
   review this (version of this) document".

o  Captured that the Secretariat should be able to act on behalf of
   secretaries.

o  Noted that reviewer information must not be lost as reviewers leave
   teams.

o  Captured the notion of a hiatus.

o  Captured the notion of a transition to inactive.

o  Secretaries that want reminders should be able to configure the
   tool to prompt them as due dates arrive.

o  Reviewers that want reminders should be able to configure the tool
   to prompt them as due dates arrive.

o  Access to statistics for individuals should initially be limited
   until we have some experience with them.

o  Clarified the scope of trying to correlate reviews to DISCUSS
   positions, recognizing this is mostly future work.
o  Captured that secretaries and reviewers need to be able to handle
   the edge cases when a review needs to be reassigned, or an
   additional reviewer needs to be assigned, after the initial
   assignee has started.

7.4.  00

o  Initial version.

8.  Informative References

[AppsDir]     "Applications Directorate", Work in Progress, April 2015.

[art-trac]    "Area Review Team Tool - Active Tickets", Work in
              Progress, April 2015.

[Gen-ART]     "General Area Review Team Guidelines", Work in Progress,
              April 2015.

[MIBdoctors]  "MIB Doctors", Work in Progress, April 2015.

[OPS-dir]     "OPS Directorate", Work in Progress, April 2015.

[RFC6385]     Barnes, M., Doria, A., Alvestrand, H., and B. Carpenter,
              "General Area Review Team (Gen-ART) Experiences",
              RFC 6385, DOI 10.17487/RFC6385, October 2011.

[RTG-dir]     "Routing Directorate", Work in Progress, April 2015.

[Secdir]      "Security Directorate", Work in Progress, April 2015.

[YANGdoctors] "YANG Doctors", Work in Progress, April 2015.

Appendix A.  A starting point for Django models supporting the review
             tool

   from django.db import models
   from ietf.doc.models import Document
   from ietf.person.models import Email
   from ietf.group.models import Group, Role

   from ietf.name.models import NameModel

   class ReviewRequestStateName(NameModel):
       """Requested, Accepted, Rejected, Withdrawn, Overtaken By
       Events, No Response, Completed"""

   class ReviewTypeName(NameModel):
       """Early Review, Last Call, Telechat"""

   class ReviewResultName(NameModel):
       """Almost ready, Has issues, Has nits, Not Ready,
       On the right track, Ready, Ready with issues,
       Ready with nits, Serious Issues"""

   class Reviewer(models.Model):
       """
       These records associate reviewers with a review team, and keep
       track of admin data associated with the reviewer in the
       particular team.
       There will be one record for each combination of reviewer and
       team.
       """
       role = models.ForeignKey(Role)
       frequency = models.IntegerField(help_text=
           "Can review every N days")
       available = models.DateTimeField(blank=True, null=True,
           help_text="When will this reviewer be available again")
       filter_re = models.CharField(max_length=255, blank=True)
       skip_next = models.IntegerField(help_text=
           "Skip the next N review assignments")

   class ReviewResultSet(models.Model):
       """
       This table provides a way to point out a set of
       ReviewResultName entries which are valid for a given team, in
       order to be able to limit the result choices that can be set
       for a given review, as a function of which team it is related
       to.
       """
       team = models.ForeignKey(Group)
       valid = models.ManyToManyField(ReviewResultName)

   class ReviewRequest(models.Model):
       """
       There should be one ReviewRequest entered for each combination
       of document, rev, and reviewer.
       """
       # Fields filled in on the initial record creation:
       time = models.DateTimeField(auto_now_add=True)
       type = models.ForeignKey(ReviewTypeName)
       doc = models.ForeignKey(Document,
                               related_name='review_request_set')
       team = models.ForeignKey(Group)
       deadline = models.DateTimeField()
       requested_rev = models.CharField(
           verbose_name="requested_revision", max_length=16,
           blank=True)
       state = models.ForeignKey(ReviewRequestStateName)
       # Fields filled in as the reviewer is assigned and as the
       # review is uploaded:
       reviewer = models.ForeignKey(Reviewer, null=True, blank=True)
       review = models.OneToOneField(Document, null=True, blank=True)
       reviewed_rev = models.CharField(
           verbose_name="reviewed_revision", max_length=16, blank=True)
       result = models.ForeignKey(ReviewResultName, null=True,
                                  blank=True)
Appendix B.  Suggested features deferred for future work

Brian Carpenter suggested a set of author/editor-focused requirements
that were deferred for another iteration of improvement.  These include
providing a way for the editors to acknowledge receipt of the review,
potentially tracking the email conversation between the reviewer and
the document editor, and indicating which review topics the editor
believes a new revision addresses.

Authors' Addresses

   Robert Sparks
   Oracle
   7460 Warren Parkway
   Suite 300
   Frisco, Texas  75034
   USA

   Email: rjsparks@nostrum.com

   Tero Kivinen
   INSIDE Secure
   Eerikinkatu 28
   HELSINKI  FI-00180
   FI

   Email: kivinen@iki.fi