
Network Working Group                                          M. Barnes
Internet-Draft                                                   Polycom
Intended status: Informational                                  A. Doria
Expires: January 13, 2011                 Lulea University of Technology
                                                           H. Alvestrand
                                                                  Google
                                                            B. Carpenter
                                                  University of Auckland
                                                           July 12, 2010


             General Area Review Team (GenART) Experiences
                    draft-doria-genart-experience-03

Abstract

   The General Area Review Team has been reviewing Internet-Drafts
   since 2004.  This draft discusses the experience and the lessons
   learned over the past 6+ years of this process.  The review team
   initially reviewed the drafts before each of the IESG telechats.
   Beginning in late 2005, review team members have been assigned to
   review documents during IETF Last Call, unless no IETF Last Call is
   necessary for the document.  The same reviewer then reviews any
   updates when the document is placed on an IESG telechat agenda.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 13, 2011.

Copyright Notice

   Copyright (c) 2010 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal



Barnes, et al.          Expires January 13, 2011                [Page 1]

Internet-Draft                   GenART                        July 2010


   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.


Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  3
   2.  Who are the GenART review team members?  . . . . . . . . . . .  3
   3.  Goals of GenART  . . . . . . . . . . . . . . . . . . . . . . .  3
   4.  GenART Reviews . . . . . . . . . . . . . . . . . . . . . . . .  4
     4.1.  IETF LC Review Process . . . . . . . . . . . . . . . . . .  4
     4.2.  IESG Telechat Review Process . . . . . . . . . . . . . . .  5
     4.3.  Form of Review . . . . . . . . . . . . . . . . . . . . . .  5
     4.4.  GenART Process Overview  . . . . . . . . . . . . . . . . .  7
   5.  Secretarial Process  . . . . . . . . . . . . . . . . . . . . .  9
     5.1.  Maintaining review spreadsheet . . . . . . . . . . . . . .  9
     5.2.  Last Call Assignment procedure . . . . . . . . . . . . . . 11
     5.3.  Telechat Assignment procedure  . . . . . . . . . . . . . . 12
     5.4.  Capturing reviews  . . . . . . . . . . . . . . . . . . . . 13
   6.  Results  . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
   7.  Impressions  . . . . . . . . . . . . . . . . . . . . . . . . . 14
     7.1.  Reviewers' Impressions . . . . . . . . . . . . . . . . . . 14
     7.2.  General Area Directors' Impressions  . . . . . . . . . . . 15
     7.3.  GenART Secretaries' Impressions  . . . . . . . . . . . . . 16
   8.  Needed Improvements  . . . . . . . . . . . . . . . . . . . . . 17
   9.  Applicability  . . . . . . . . . . . . . . . . . . . . . . . . 18
   10. Security considerations  . . . . . . . . . . . . . . . . . . . 18
   11. IANA considerations  . . . . . . . . . . . . . . . . . . . . . 19
   12. Acknowledgments  . . . . . . . . . . . . . . . . . . . . . . . 19
   13. Changes since Last Version . . . . . . . . . . . . . . . . . . 19
   14. Informative References . . . . . . . . . . . . . . . . . . . . 20
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 20













Barnes, et al.          Expires January 13, 2011                [Page 2]

Internet-Draft                   GenART                        July 2010


1.  Introduction

   The General Area Review team was created personally by the General
   Area Director in 2004.  The review team has been retained by
   subsequent General Area Directors.  It has no official role in the
   IETF standards process, except as a set of individuals entitled, like
   everyone, to comment on Internet-Drafts.  Its Secretary, and the team
   of volunteer reviewers, serve at the invitation of the General AD.

   Discussion of this document is intended to take place on the IETF
   mailing list <mailto:ietf@ietf.org> in the absence of a better home.
   In addition, comments may be specifically sent to the gen-art mailing
   list: <mailto:gen-art@ietf.org>.


2.  Who are the GenART review team members?

   The reviewers are typically individuals who have a fair amount of
   experience within various IETF Working Groups (WGs), have authored WG
   drafts and RFCs, and are often considered to be subject matter
   experts (SMEs) in their particular areas of work.  The current review
   team comprises such technical experts, including several WG chairs
   and past and current IAB members.  Several past and current ADs have
   served as reviewers.  Two past General ADs have also served as
   reviewers, with one currently serving.

   Members of the review team sometimes excuse themselves from the team
   for various reasons, typically due to "day" job demands.  However,
   they often rejoin (for periods of time) as their schedules allow.
   Also, some reviewers remain on the team with a reduced workload,
   being assigned just one document (at Last Call time) to review each
   month.  Section 12 provides a list of currently active
   reviewers, along with those who have served on the review team in the
   past.


3.  Goals of GenART

   The original and continuing goal of the GenART team was, and is, to
   offload some of the burden of IESG reviews from the General Area AD.
   The load for the bi-weekly IESG reviews is often quite large;
   occasionally there are more than 20 drafts scheduled for discussion
   in a single telechat.  In addition, ADs have less than a week's
   notice for many of the documents on the telechat agenda.

   GenART was based on a model that had proved productive in the OPS
   Directorate: Quick review close to telechat time, to advise the AD on
   issues that remain serious.  By having a trusted group of reviewers



Barnes, et al.          Expires January 13, 2011                [Page 3]

Internet-Draft                   GenART                        July 2010


   read and evaluate the drafts, the General Area AD would be able to
   concentrate on those drafts where there was a concern expressed by
   the reviewer.  The reviewers are expected to provide feedback based
   on a whole set of criteria, including the criteria summarized in
   Section 4.3.  The overall objective is to ensure that the documents
   are well structured, can be easily understood, at least at a high
   level, and provide a reasonable basis for implementation (for
   standards-track documents).

   While other area (and WG) directorates/review teams existed prior to
   GenART and more have been established since, the roles of each are
   fairly distinct.  Thus, there is little overlap between the goals
   and review criteria for the various review teams.  It is also very
   valuable for these other review teams to operate independently.  For
   example, when both GenART reviews and sec-dir reviews raise the same
   sorts of concerns, it's a clear red flag that the document needs more
   work before progressing.  In addition, due to the typical
   thoroughness (and objectivity) of the various review teams'
   reviews, ADs/PROTO shepherds are often able to work with the
   editors/WG (and vice versa depending upon area and WG structure) to
   improve the overall quality of the final document.

   Statistics from the GenART reviews over the past 6+ years show a
   trend of increased quality and readiness for progression of documents
   by the time they are placed on the telechat agenda.  Additional
   statistics are discussed in Section 6.


4.  GenART Reviews

4.1.  IETF LC Review Process

   While the original process was meant only for reviews just before the
   IESG telechat, it was decided to include IETF Last Call reviews in
   early 2005.  At first, this seemed to be an overloading of the
   process and presented some initial difficulties.  However, over time
   it has proven to be quite effective.  Assigning the documents at IETF
   LC time typically gives a reviewer more time to review a document.
   And, in some cases, the IETF LC version is the one to appear on the
   telechat.  Thus, by the time documents are added to the telechat
   agenda, a majority (typically at least 70%) have already been
   reviewed.  For those documents that have been up-versioned, the
   amount of time dedicated to re-review depends upon the review summary
   for the IETF LC review.

   The assignments at IETF LC time evolved to minimize the gap between
   LC announcements and assignment time, with the secretary doing LC
   assignments every Thursday night.  This typically allows the reviewer



Barnes, et al.          Expires January 13, 2011                [Page 4]

Internet-Draft                   GenART                        July 2010


   at least one week and sometimes two to three weeks to complete the
   review.  The reviews are obviously most helpful when done on or
   before the end of IETF LC.

   The Last Call assignments are done on a fairly strict round robin
   basis to ensure a fair workload amongst all the reviewers.  Reviewers
   who are unavailable (due to vacations, etc.) during the review
   period are obviously excluded from that round of assignments, but
   remain in the same queue position for the next round.  The order is
   occasionally modified to avoid assigning an editor/author or WG chair
   their own documents.  A reviewer may also NACK an assignment if they
   feel they may have some bias (although corporate affiliations are not
   considered to be sources of bias) or they don't feel they can review
   the document in a timely manner.

   The assignment process is completely manual, although a spreadsheet,
   maintained using Open Office, tremendously facilitates the process.
   The details are described in Section 5.  Ideally, this process could
   be automated.  However, manual intervention would still be required
   to maintain the appropriate available reviewer list (unless reviewers
   took on the task of maintaining their data in some sort of database).
   Further details on the tools necessary to automate the entire process
   are provided in Section 8.
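
   For illustration only, the following Python sketch captures the kind
   of round-robin assignment logic described above; the function name,
   data structures, and conflict handling are hypothetical, since the
   real process is manual and spreadsheet-based:

      # Hypothetical sketch of round-robin LC assignment; not an
      # existing GenART tool.
      def assign_last_calls(reviewers, queue_start, documents,
                            unavailable, conflicts):
          """reviewers   : names in round-robin (queue) order
             queue_start : index where last week's round stopped
             documents   : drafts announced for IETF Last Call
             unavailable : reviewers to skip this round
             conflicts   : dict of draft -> set of conflicted names
          """
          assignments = {}
          i = queue_start
          for doc in documents:
              for _ in range(len(reviewers)):
                  candidate = reviewers[i % len(reviewers)]
                  i += 1
                  if candidate in unavailable:
                      continue  # unavailable reviewers are skipped
                  if candidate in conflicts.get(doc, set()):
                      continue  # never assign their own document
                  assignments[doc] = candidate
                  break
          return assignments, i % len(reviewers)  # new queue start

   In practice the secretary also balances document sizes and handles
   NACKs by hand, which is one reason full automation is still an open
   item (see Section 8).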

4.2.  IESG Telechat Review Process

   The process for reviewing documents when they appear on the IESG
   agenda:

   o  The "nearly final" IESG meeting agenda generally appears on
      Thursday night, less than one week before the IESG telechat.  The
      GenART secretary uses this as the input for the assignment
      process.
   o  For documents reviewed at IETF Last Call, a new review is only
      asked for if the document is revised.  In this case the reviewer,
      typically the person who did the Last Call review, only needs to
      check that any open issues were resolved.  Often the draft will
      not have changed between IETF LC and the IESG telechat review.
      Section 4.4 provides the step-by-step telechat review assignment
      process, with specific details on the maintenance of the review
      assignment data in the spreadsheets described in Section 5.

4.3.  Form of Review

   Rather than invent new guidelines, the GenART requirements for the
   form of a review were borrowed liberally from
   draft-carpenter-solutions-sirs-01, making adaptations for the special



Barnes, et al.          Expires January 13, 2011                [Page 5]

Internet-Draft                   GenART                        July 2010


   "late, quick review" case and the nature of the General Area's
   concerns.

   Each review must start with a summary statement chosen from or
   adapted from the following list:

   o  This draft is ready for publication as a [type] RFC, where [type]
      is Informational, Experimental, etc.  (In some cases, the review
      might recommend publication as a different [type] than requested
      by the author.)
   o  This draft is basically ready for publication, but has nits that
      should be fixed before publication.
   o  This draft is on the right track but has open issues, described in
      the review.
   o  This draft has serious issues, described in the review, and needs
      to be rethought.
   o  This draft has very fundamental issues, described in the review,
      and further work is not recommended.
   o  Unfortunately, I don't have the expertise to review this draft.

   The length of a review can vary greatly according to circumstances,
   and it is considered acceptable for purely editorial comments to be
   sent privately if it's obvious that the document needs substantial
   revision.  All substantive comments, however, must be included in the
   public review.  Wherever possible, comments should be written as
   suggestions for improvement rather than as simple criticism.
   Explicit references to prior work and prior IETF discussion should be
   given whenever possible.

   Reviewers are asked to review for all kinds of problems, ranging
   from basic architectural or security issues and Internet-wide
   impact to technical nits, problems of form and format (such as IANA
   Considerations or incorrect references), and editorial issues.
   Since these reviews are
   on documents that are supposed to be finished, the review should
   consider "no issue too small" - but should cover the whole range from
   the general architectural level to the editorial level.

   All reviews should apply generally agreed IETF criteria, such as:

   o  [RFC1958]: The Architectural Principles of the Internet
   o  [RFC3426]: General Architectural and Policy Considerations
   o  [RFC3439]: Some Internet Architectural Guidelines and Philosophy
   o  ID-Checklist: The "ID checklist" document maintained by the IESG
   o  [I-D.rfc-editor-rfc2223bis]: Instructions to RFC Authors
   o  [RFC5226]: BCP 26 - Guidelines for Writing an IANA Considerations
      Section in RFCs





Barnes, et al.          Expires January 13, 2011                [Page 6]

Internet-Draft                   GenART                        July 2010


   o  [RFC3552]: Guidelines for Writing RFC Text on Security
      Considerations
   o  As well as any other applicable architectural or procedural
      documents.  It is considered important that reviews give precise
      references to such criteria when relevant to a comment.

   Of special interest to the GEN area, because they are no other
   area's special interest, are:

   o  Clear description of why the document or protocol is useful to the
      Internet.
   o  Adherence to IETF formalities such as capitalized "must",
      "should", etc. in normative statements, per the ID-Checklist.
   o  Useful and reasonable IANA considerations.  Ensure that all
      necessary registries are defined/referenced and that IANA
      assignment criteria are defined and complied with.
   o  Correct dependencies for normative references.
   o  That it's written in reasonably clear English.
   o  Checking the updates/obsoletes information.
   o  Running idnits and checking the output.
   o  Checking that things imported by reference, especially from
      other RFCs, make sense (notably definitions of terms, security
      considerations, lists of criteria) and ensuring they are used as
      intended by the referenced document.
   o  Checking that examples (e.g., FQDNs, telephone numbers, IP
      addresses) are taken from the right spaces.

4.4.  GenART Process Overview

   The following provides a general overview of the Gen-ART process
   along with some basic rules associated with assignments.  The very
   precise details of the secretary's process are provided in Section 5.

   o  The availability of reviewers and the order of assignments for
      the next round of Last Call document assignments are updated
      weekly and are available in the directory where all the
      assignments and reviews are cached.
   o  At telechat assignment time, all previously reviewed drafts are
      assigned to the reviewer who reviewed them previously, assuming
      that reviewer is available.  Otherwise, these documents are
      assigned to a new person in the process described below.
   o  In the case of multiple drafts grouped as a single ballot, those
      are typically divided among several reviewers unless they're very
      small (i.e., less than 20 pages).
   o  The secretary attempts to avoid assigning documents that might
      conflict with other IETF roles such as WG chairs, other
      directorates, etc.  However, in the cases where the secretary
      doesn't note the conflict, the reviewer should notify the



Barnes, et al.          Expires January 13, 2011                [Page 7]

Internet-Draft                   GenART                        July 2010


      secretary and gen-art mailing list so another reviewer may be
      assigned.
   o  It should be emphasized that assignment is never made according to
      a reviewer's technical specialty.  When it happens that, for
      example, routing drafts fall to routing experts or MIBs fall to
      MIB doctors, it is coincidental.  To the reviewer, the choice
      looks random.
   o  There is an attempt to evenly distribute documents amongst
      reviewers at LC time by using a round robin process, starting from
      where the previous week's assignments stopped.
   o  Typically, no attempt is made to actually equalize the load, as
      the length and complexity of the drafts are not taken into
      account in this process.  (Thus, a reviewer could end up with a
      couple of hundred-page documents, but this is statistically rare.)
      However, in the case of a reviewer that might receive more than
      one new LC document at one time, the secretary does try to ensure
      that both are not large documents.
   o  Once the assignments are made, the web pages that list the reviews
      and the assignments are posted.  Since the telechat agenda is not
      published until the end of the day on the Thursday prior to the
      telechat (i.e., one week beforehand), the secretary needs to
      complete the assignments on that Thursday evening.  This often
      requires working later in the evening and also requires an
      Internet connection even when traveling.
   o  If the reviewers notice any problems or conflict of interest, a
      bargaining process, shifting documents from one reviewer to
      another, takes place.  The secretary updates the assignment files
      with any new assignments.
   o  Once the review has been completed, the reviewer sends the review
      to the GenART list, ideally using the template provided in the
      review assignment emails.  Typically, reviews are also sent to
      authors, ADs and WG chairs/Proto Shepherds.  The only case where
      this might not be done is when there are no issues found for a re-
      review and none had been found on an initial review.  Sending the
      review to the authors, ADs and/or WG chairs/Proto Shepherds was
      originally voluntary but is now considered standard practice.
      Reviewers may also send the reviews to the IETF discussion list,
      but that is entirely at the discretion of the reviewer, in which
      case the author must be copied on the review to ensure they see
      any follow-up discussion.  Reviewers may also send the comments to
      the WG; however, this typically causes the review to end up in the
      moderation queue, as most reviewers do not want to subscribe to
      the WG lists for the documents they review.  Thus, it is expected
      that the original recipients (authors, WG chairs/PROTO or AD) may
      forward the review to the WG mailing list if they believe it is
      necessary.  In the past, sending these reviews resulted in
      confusion among the authors, who may not have been expecting a
      GenART review and may not be familiar with GenART.  Thus,



Barnes, et al.          Expires January 13, 2011                [Page 8]

Internet-Draft                   GenART                        July 2010


      reviewers are reminded to prepend the description of GenART and
      the purpose of the review.  This information is part of the
      standard template provided in the review assignment emails.
   o  The secretary gathers the reviews, sometimes edits them for
      format, and records each review, including its synopsis, in the
      spreadsheet on the web pages.  This is typically done on
      Thursday.  This is one aspect of the process that can be easily
      delegated, such that one volunteer uploads all the reviews and
      the secretary need only update the fields in the spreadsheet.  If
      the reviewer
      has not provided a synopsis ("Summary" field in the template), the
      secretary makes a best guess based on the review details.  Note
      that in most cases the reviewers do include a synopsis.
   o  Ideally the reviews should be posted to the gen-art mailing list
      by COB Tuesday (approximately 8 PM EST).  This is necessary to
      allow the General Area Director time to consider the reviews
      prior to the telechat.  If the reviews are received after
      Tuesday, they may not be read by the AD before the IESG telechat.
      Due to time constraints, the spreadsheets containing review
      summaries/assignments are only updated on Thursday evenings when
      the new LC assignments and upcoming telechat assignments are
      done.  Ideally, the
      reviews would get uploaded on the Tuesdays prior to the telechat
      along with the updated spreadsheets.
   o  If the AD concludes that the concerns raised by the reviewer
      warrant placing a DISCUSS comment on the document, the AD will do
      so, and the DISCUSS must be resolved before the document advances.
      Usually, the reviewer will be involved in the resolution process,
      but the responsibility for the DISCUSS rests with the AD.


5.  Secretarial Process

   This section summarizes the details of managing the review materials,
   including the spreadsheet used to track all reviews and the HTML
   files containing the review assignments.

5.1.  Maintaining review spreadsheet

   An Open Office spreadsheet is used to enter all the documents at the
   time of assignment and to capture all the reviews.  For IETF LC
   assignments, the assignments are completed before adding the
   documents to the spreadsheet, as described in Section 5.2.  For
   telechat assignments, documents are obviously only added in the
   cases where there is no previous LC assignment.  For the other
   documents, the appropriate fields are updated as described in
   Section 5.3.

   All the reviews can be accessed from the spreadsheet via hyperlinks
   from specific fields as summarized below.  The following information



Barnes, et al.          Expires January 13, 2011                [Page 9]

Internet-Draft                   GenART                        July 2010


   is maintained in the spreadsheet, in the order listed (for
   illustration, a sketch of one row as a simple record follows the
   list):

   1.  "Chat/LC Date": indicates either the date on which the LC review
       is due or the date of the telechat.
   2.  "Document": Filename for the text document.  This field also
       includes a hyperlink to the IETF I-D tracker.
   3.  "Assigned": Name of the reviewer assigned to that document.
   4.  "Category": This field contains one of the following self
       explanatory values: "PROTO - WG", "PROTO - Ind/AD", "Doc - WG",
       "Doc - Ind/AD", or "IETF LC".  Note that GenART does not review
       documents submitted directly to the RFC editor.  The "IETF LC"
       field is entered obviously for all documents at LC time.  It is
       changed to one of the other appropriate fields, based on the
       information in the telechat agenda
   5.  "Previous Review": This includes a link to any previous reviews.
       For example, when a doc appears on a telechat agenda, if an IETF
       LC review was done, this field is updated to "IETF-LC" and it has
       a hyperlink to the LC review.  The field is set to "New" when a
       document is first assigned/added to the spreadsheet.  In the case
       of returns, this field has a value of "Return" or "Return/
       IETF-LC" for documents for which there is an LC review.  It
       should be noted that since GenART started doing reviews at LC
       time, there seem to be far fewer returns on the agenda.
   6.  "Current Review Summary": When the field contains text, it
       includes a link to the most recent review - typically IETF LC or
       telechat.  Occasionally, a reviewer will re-review a document
       prior to its telechat assignment, in which case the review is
       added to the spreadsheet but the date is not changed, in order
       to maintain consistency in the date field (the reviews
       themselves contain the review date).
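
   For illustration, a single row of this spreadsheet can be thought of
   as a small record.  The following Python sketch mirrors the field
   list above; the class name and example values are hypothetical:

      # Hypothetical record for one row of the review spreadsheet.
      from dataclasses import dataclass

      @dataclass
      class ReviewRow:
          chat_or_lc_date: str         # "Chat/LC Date"
          document: str                # filename, linked to the tracker
          assigned: str                # reviewer name
          category: str                # "IETF LC", "PROTO - WG", ...
          previous_review: str         # "New", "IETF-LC", "Return", ...
          current_review_summary: str  # text + link to latest review

      # Example row (values are made up for illustration):
      row = ReviewRow("2010-07-08", "draft-ietf-xyz-00", "Smith",
                      "IETF LC", "New", "_")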

   The following summarizes the steps to add a new document to the
   spreadsheet:

   1.  In order to optimize steps, blank rows are first inserted for the
       number of new documents to be added.
   2.  To minimize data entry, a row with default fields (including the
       links for the hyperlinks) is kept at the end of the file.  There
       is a separate default row for IETF LC versus Telechat
       assignments.  This row is copied into each of the new blank rows.
       The dates are then entered (this allows double-checking that
       all documents from the review assignments, especially LC, are
       accounted for).
   3.  The document name is then copied to the name field as well as
       being appended to the hyperlink for the "Review Summary" field.
       The hyperlink is included as part of the default row.  This
       minimizes the steps in entering the reviews in the spreadsheet.




Barnes, et al.          Expires January 13, 2011               [Page 10]

Internet-Draft                   GenART                        July 2010


   4.  The data is also sorted by "Chat/LC Date", "Assigned" and
       "Document".  The file is then saved and closed.
   5.  The file is then reopened and saved as HTML.
   6.  The file is opened a second time and sorted by "Assigned",
       "Chat/LC Date" and "Document" to provide the document reviewers
       an easy way to find any outstanding assignments.

5.2.  Last Call Assignment procedure

   The secretary can either cache the Last Call announcements as they
   arrive or just check the IETF announcement mailing list archives.
   The current secretary does both, double-checking the archives to
   ensure no reviews were missed.  The assignments are done on Thursday
   evening, along with any telechat assignments.  This optimizes the
   process in terms of batch changes to files.

   The assignments are listed in an HTML file.  The following are the
   steps in creating that file:

   1.  The order of assignment is actually created the week before, as
       detailed below.  Thus, before starting the new assignments, the
       current file is saved for editing for the following week.  The
       current file naming convention is "reviewersyymmdd-lc.html"
       (e.g., for July 8th, 2010, the file reviewers100708-lc.html was
       created and the file for the following week is named
       reviewers100715-lc.html).  A small sketch of this naming
       convention follows this list.
   2.  Since the file is already prepared with the appropriate ordering
       of reviewers, the assignments are done in the order of due dates.
       The LC announcement text (starting with the document name) is
       copied into the assignment file for each of the new LC documents.
   3.  The paragraph giving the due date is shortened to the following
       text: "IETF LC ends on:", keeping the date.
   4.  Once the assignment file is complete, the new documents are
       added to the spreadsheet as described in Section 5.1 above.
   5.  The assignment file for the next week is then updated to reflect
       the next reviewer in the round robin process, by simply cutting
       and pasting the names in the list in a block and removing any
       "one doc per month" reviewers (annotated with an "*") that have
       already received their monthly assignment.  If the next round of
       assignments occurs at the beginning of a new month, the "one doc
       per month" reviewers are added back into the list (in the normal
       "by first name alphabetical order").
   6.  The assignment files and updated spreadsheets are then cached on
       the GenART server.
   7.  An email providing a link to the assignment file, along with
       the updated spreadsheets, is sent to the gen-art mailing list.
       This email has a standard form, such that the reviewers can
       simply cut and paste the template to include the GenART context
       statement



Barnes, et al.          Expires January 13, 2011               [Page 11]

Internet-Draft                   GenART                        July 2010


       and a link to the FAQ.
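
   The file naming convention in step 1 is simple enough to express as
   a small helper; this is only a sketch of the convention, not an
   existing tool:

      # Sketch: build the weekly LC assignment filename, e.g.
      # "reviewers100708-lc.html" for July 8th, 2010.
      from datetime import date

      def lc_assignment_filename(d: date) -> str:
          return d.strftime("reviewers%y%m%d-lc.html")

      print(lc_assignment_filename(date(2010, 7, 8)))
      # -> reviewers100708-lc.html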

5.3.  Telechat Assignment procedure

   Since LC assignments are now the starting point for GenART document
   reviews, the telechat assignments are generally straightforward as
   the majority of the documents are already in the spreadsheet.  The
   following details the steps:

   1.  The telechat agenda is typically available around 6 PM PDT.
       The assignment HTML file is created from the email announcing
       the upcoming telechat agenda.  The filename has the following
       format, with the date corresponding to the telechat date (versus
       the date of assignment, as is the case for Last Call
       assignments): "reviewersyymmdd.html".
   2.  Rows are added to the agenda for the reviewers' names.
   3.  The reviewers' names are then added to the weekly assignment
       file.
   4.  As each reviewer is added to the assignment file, the review
       spreadsheet is updated as follows:
       *  "Chat/LC Date" is changed to the telechat date.
        *  The link to the LC review, if available, is copied and
           pasted as the link for the "Previous Review" column.
        *  If the version for the telechat is different, the link in
           the "Current Review" column is updated, so that it will
           point to the new review when available (this saves a step
           because the updating of the file version is done in the
           same step AFTER the link is copied).  The "text" for the
           "Current Review" is cleared (i.e., set to the default of
           underscore, "_").
        *  If the version number is different, the change is also made
           to the "Document" field.  Note that this is the least
           critical step because the link in that field points to the
           tracker, so the right version should always be pulled.  (A
           small sketch of these row updates follows this list.)
   5.  In the case of a document that did not go through IETF LC, a
       reviewer is assigned using the order in the file to be used for
       Last Call assignments for the next week.
   6.  Once the reviewer(s) have been determined, the LC assignment file
       for the next week is updated.
   7.  Any new documents are then added to the spreadsheet (and the
       updates saved) per the steps as described in Section 5.1.
   8.  The assignment files and updated spreadsheets are then cached on
       the server used for GenART; currently in a "temporary" directory
       on www.softarmor.com, hosted by Dean Willis.  Ideally, these
       files would be stored on an IETF hosted server and links to
       reviews would be available in the ID tracker.
   9.  An email providing a link to the assignment file, along with
       the updated spreadsheets, is sent to the gen-art mailing list.
       This email has a standard form, such that the reviewers can
       simply cut and paste the template to include the GenART context
       statement



Barnes, et al.          Expires January 13, 2011               [Page 12]

Internet-Draft                   GenART                        July 2010


       and a link to the FAQ.
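
   The row updates in step 4 amount to a small data transformation.  The
   sketch below illustrates them on a row represented as a plain
   dictionary keyed by the column names of Section 5.1; the function
   name is hypothetical:

      # Hypothetical sketch of updating a row when an LC-reviewed
      # document appears on a telechat agenda.
      def move_row_to_telechat(row, telechat_date, new_name=None):
          # The LC review becomes the "Previous Review" (the real
          # spreadsheet also carries the hyperlink across).
          row["Previous Review"] = "IETF-LC"
          # Clear the current summary back to the default placeholder.
          row["Current Review Summary"] = "_"
          # The date column now holds the telechat date.
          row["Chat/LC Date"] = telechat_date
          # Least critical: refresh the document name if re-versioned.
          if new_name is not None:
              row["Document"] = new_name
          return row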

5.4.  Capturing reviews

   As noted in Section 4.4, the spreadsheet is typically updated with
   the review summaries on Thursday evenings, just prior to entering
   the data for that week's LC and any Telechat assignments.  The
   following
   summarizes the steps to capture the reviews:

   1.  Currently, a volunteer is assisting the secretary in caching
       the email reviews as they arrive.
   2.  In the cases where the review is included inline in the body of
       the email, the review is cut and pasted into a text file and
       saved with the reviewer's last name appended to the filename,
       e.g., draft-ietf-xyz-00-smith.txt (a small sketch of this step
       follows the list).
   3.  In the case where the review is included as an attachment to the
       email, the file can be directly saved and uploaded.
   4.  The volunteer uploads the reviews by around 5 PM CST on
       Thursdays, so that they are available to the secretary at the
       time that week's assignments are done.  This sequence is
       necessary to ensure the information for documents on the
       upcoming telechat is up to date.
   5.  The review summary is entered into the text portion of the
       "Current Summary" field.  Note that the hyperlink to the review
       (added at assignment time) will automatically work when the file
       is uploaded.
   6.  Once all the reviews have been entered and the spreadsheets
       formatted, the review spreadsheet is saved and files uploaded per
       the last three steps in Section 5.1.
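
   As a minimal sketch of the file naming used in step 2 (the helper
   name is hypothetical; the actual caching is done by hand):

      # Sketch: save an inline email review as <draft>-<surname>.txt,
      # e.g., draft-ietf-xyz-00-smith.txt.
      from pathlib import Path

      def save_review(draft_name, reviewer_surname, review_text,
                      directory="."):
          filename = f"{draft_name}-{reviewer_surname.lower()}.txt"
          path = Path(directory) / filename
          path.write_text(review_text)
          return path

      # save_review("draft-ietf-xyz-00", "Smith", "Summary: Ready ...")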


6.  Results

   Over the past 6+ years, GenART has provided reviewing services to
   3 ADs and has done around two thousand publicly available reviews.
   The reviews have been executed with a team of around a dozen full-
   time reviewers, plus other reviewers receiving one document
   assignment each month.  There are currently 9 reviewers in the
   latter category.  The full-time reviewers receive 2-3 assignments
   each month.  In terms of improving quality, the number of documents
   that are ready at the time of the telechat has increased since the
   reviews are now initiated at LC time.  Based on the data from 2007,
   there were over 250
   documents that were assigned at LC time that went through IESG
   review.  Of those 250 documents, 80% of the LC reviews were completed
   (205 documents).  Of the completed reviews about 75% (144 documents)
   were "Ready" at the time of the telechat.  Of those 144 documents,
   roughly 1/4 had been deemed "Ready" (with no nits) at LC time (based
   on a sample of 50 reviews).  For the documents that were not reviewed
   at LC time, only about 1/4 of those were deemed "Ready" when they



Barnes, et al.          Expires January 13, 2011               [Page 13]

Internet-Draft                   GenART                        July 2010


   were reviewed for the telechat.  So, doing the gen-art reviews at
   Last Call time does seem to improve the quality of the documents for
   the telechat.


7.  Impressions

   This section is divided into 3 subsections: the impressions gathered
   from the GenART review team, the impressions of the ADs for whom
   they worked, and the impressions of the secretaries of GenART.

7.1.  Reviewers' Impressions

   The following comments are excerpted and edited from the comments
   sent in by the reviewers of GenART in response to the request:

   "We'd like to ask you each to write a few lines about your personal
   experience and lessons learned as a GenART reviewer."

   o  We really do find problems, but we don't find problems with most
      documents.
   o  Comments seem to be in three areas: editorial/grammar, editorial/
      what-the-heck-does-this-mean, and actual problems.  I'm seeing
      fewer reviews in the first category, which is a good thing.
   o  It is becoming rarer that we hear back "these guys have suffered
      enough, I'm voting no objection" (I'm remembering an LDAP document
      that had been around so long it had 2119 referenced AS A DRAFT -
      some people suffered a lot).
   o  The direct assignment of reviews is necessary and effective.  It
      does not matter much as far as I can tell what scheme is used to
      actually do the assignment.
   o  Folks are very open to the reviews that come out of GenART.  This
      somewhat surprised me because I have seen resistance to outside
      reviews in other cases.
   o  The improvements that have come about (for example one of my
      latest, the sipping conference draft - whatever the outcome) have
      made a big difference to the comprehensibility and usability of
      the documents - and provide a useful incentive to keep going.
   o  Some form of review like this is desperately needed.  While most
      of the stuff we see is good, every once in a while really bad
      errors have made their way all the way to IESG vote.
   o  Reading this stuff is interesting.  I like having a reason to read
      a wide range of materials.
   o  I am more than convinced that this can be and is a valuable
      process.  It is IMO a pity that SIRS and so on did not take off,
      because this late stage reviewing is a poor substitute for doing
      the same thing at a much earlier stage.  Very few of the drafts
      that have come past my screen are truly fully ready for IESG



Barnes, et al.          Expires January 13, 2011               [Page 14]

Internet-Draft                   GenART                        July 2010


      review.  It is actually a joy to find the occasional nugget that
      is both well written and is a proper technical job, such that the
      review really can say 'This is ready'.
   o  I have certainly found the process intellectually stimulating!  It
      encourages me to take a wider interest in what is going on in the
      IETF, but consumes a fair bit of time to do a proper job, and
      requires a very wide knowledge to be able to properly catch the
      cross-area implications: I hope (believe!) that this is something
      that one gets better at with experience and doing a few of these
      reviews.
   o  There are probably a very limited pool of people who have both the
      time and the inclination to keep on doing these reviews.  It does
      require a fair bit of dedication.
   o  It is difficult to avoid correcting the English, even if that is
      not really the point: Often really bad English (whether as a
      result of non-mother tongue authors with limited grasp or mother
      tongue authors using informal language) obscures/corrupts what is
      being said or just makes it impossible to read.
   o  Mostly authors welcome the comments: I think most of them
      understand the concept of 'ego-free reviewing' and we have
      generally been constructive rather than destructive.
   o  Part of the job of GenART is to think the unthinkable from another
      point of view, to challenge (apparently undocumented) assumptions
      and apply experience from other fields.

7.2.  General Area Directors' Impressions

   It should be noted that these impressions are from multiple General
   Area Directors; thus, the "I"s are not necessarily associated with a
   specific AD.

   It's essential.  The reviewing load for the IESG <shout>DOES NOT
   SCALE</shout>.

   In a single fortnight, for example, the IESG had 21 drafts on the
   agenda.
   It is just impossible, and no wonder we sometimes miss serious
   issues.

   So I think a distributed review team with o(30) trusted reviewers
   needs to be institutionalized.  I suspect that will need to be
   formalized in a BCP sooner or later - with their reviews having a
   formal position in the standards process, and the expectation that
   the whole IESG truly reviews all documents being relaxed.

   We've learned that polite, well reasoned, constructive reviews are
   very positively received by authors and WGs.  Dismissive reviews are
   counter-productive.  And reviews sent in private eventually show up
   in public, so it's better to go public at the start.



Barnes, et al.          Expires January 13, 2011               [Page 15]

Internet-Draft                   GenART                        July 2010


   Normally, LC reviews are available in good time for the draft to be
   revised before reaching the IESG agenda.  It is important that this
   happens, except for an emergency situation where the responsible AD
   has good reason to place the draft on the agenda immediately.  In
   that case it would be preferable for the AD to inform the GenART
   team, so that the review can be expedited.

   The other problem is a big detail - between late Thursday or early
   Friday when the secretary sends out the assignments, and Wednesday
   when the General Area AD likes to start filling in ballots based on
   the reviews received by COB on Tuesday, there are only three work
   days (plus possible volunteer time over the weekend).  Now even with
   only one document to review, that may be a real challenge.
   Sometimes, a lucky reviewer will get 130 pages (e.g.
   draft-ietf-nntpext-base-27).  That doesn't compute.

   There are some mechanical issues.  The process followed is far too
   manual.  Everything needs to be robotic except for the judgment calls
   about which reviewer gets which draft.  Similarly, the reviewer
   should be able to just paste the review into a web form, click, and
   it's sent off to everyone appropriate and posted to the review site.

7.3.  GenART Secretaries' Impressions

   Serving as the secretary of GenART is a worthwhile experience.  From
   a personal point of view, it gives the secretary an easy way to track
   all of the work going through the IESG review process and to see
   how that work flows through the process.  Also, by reviewing and
   doing light editing on all of the reviews, in order to create some
   degree of uniformity of presentation and to create the one-line
   abstracts that
   go on the review web page, the secretary has an opportunity to really
   get a survey of the work being approved by the IETF.

   The nature of these reviews is informal, and originally the reviews
   were only intended for the General Area AD, though they were made
   public.  During 2004 there was little if any interaction between
   authors and reviewers.  There was some discussion during 2004 about
   trying to expand the role of GenART to a more formal, early review
   model, i.e., to evolve it into a form of SIRS.  The original GenART
   secretary was against such a transformation because she felt it would
   risk something that worked.  She believed that the risk was inherent
   in formalizing the reviews and in the standardization of the review
   mechanisms that would result from formalization.  Another
   concern involves the interaction between reviewers and authors.  As
   discussed above, it has become the practice to send reviews to the
   authors with an explanation about the nature of GenART reviews.
   While it is clear that this has resulted in improved RFCs, it has
   also resulted in increased work load for the reviewers.



Barnes, et al.          Expires January 13, 2011               [Page 16]

Internet-Draft                   GenART                        July 2010


   The secretary thinks that GenART is an experiment that works well,
   but believes it is fragile.  The secretary is often
   concerned about overburdening reviewers, and feels it is her
   responsibility to keep them from burning out.  Adding additional
   reviewers to the review team would help to alleviate this concern.
   In terms of the process, adding additional reviewers has minimal
   impact.


8.  Needed Improvements

   The current size of the review team introduces a fairly heavy
   workload for the individual reviewers who are not on the "one doc
   per month" assignment cycle.  Additional reviewers would be really
   helpful to alleviate this workload.  It is also important to note
   that having additional reviewers adds minimal workload to the
   secretary's process; thus, the only blocking point is finding the
   right folks who are interested in this type of volunteer role.  As
   noted in Section 7.2, 30 would be a good size for the review team.
   This would cut the workload for an individual reviewer in half (given
   the current model of 9 reviewers on the "one doc per month"
   assignment cycle).

   Obviously, automation of the process would be a good thing.  However,
   the current secretary is not highly motivated to transition to a more
   automated approach until a significant part of the process is
   automated.  In more recent consideration of this situation, it likely
   would be best to first automate the process of entering the reviews,
   as that benefits the review team as a whole.  This automation should
   allow the reviewers to enter the reviews via a web interface that
   would automatically generate the appropriate emails - quite similar
   to how the draft "Upload" tool currently works.  Also, given
   consistent naming conventions for the review forms, this step would
   automate some of the process for the secretary, as the reviews would
   automatically appear via the Spreadsheet hyperlinks, although there
   would still be a need to manually enter the summary.  But this would
   eliminate the need to edit/normalize and upload files.  It would
   also, hopefully, eliminate the problems encountered with unflowed
   text in emails and with getting reviews properly formatted using
   some text editors.
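
   As a rough illustration of the kind of automation envisioned here,
   the following sketch composes the standard notification email from a
   submitted review using only the Python standard library.  The
   preface text, helper name, and recipients other than the gen-art
   list are hypothetical, not part of any existing IETF tool:

      # Hypothetical sketch of the email a web-based "review upload"
      # tool could generate automatically.
      from email.message import EmailMessage

      GENART_PREFACE = (
          "I am the assigned Gen-ART reviewer for this draft.  For "
          "background on Gen-ART, please see the FAQ at "
          "<FAQ URL>.\n\n")

      def build_review_email(draft_name, reviewer_addr, summary,
                             review_text, extra_recipients=()):
          msg = EmailMessage()
          msg["Subject"] = "Gen-ART review of " + draft_name
          msg["From"] = reviewer_addr
          msg["To"] = "gen-art@ietf.org"        # review team list
          if extra_recipients:                  # authors, ADs, chairs
              msg["Cc"] = ", ".join(extra_recipients)
          msg.set_content(GENART_PREFACE + "Summary: " + summary +
                          "\n\n" + review_text)
          return msg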

   Section 5 was written to facilitate the process of determining tools
   requirements, by providing the very detailed steps currently applied
   to the process.  As noted above, automating the upload of the reviews
   could be a good first step.  This is somewhat starting at the end of
   the process.  However, it seems that by automating in this
   direction, we may have optimal results, since one of the earliest
   steps in the process, the task of assigning reviewers, likely needs
   the most



Barnes, et al.          Expires January 13, 2011               [Page 17]

Internet-Draft                   GenART                        July 2010


   manual intervention, even with tools available.

   The current security directorate (sec-dir) secretary does use some
   tools for assignments and generating assignment emails.  These tools
   could be considered for use by the GenART secretary.  Since the sec-
   dir reviews are not cached and the information maintained for those
   reviews is less detailed, there would be no reusability of that
   aspect.  However, if the GenART spreadsheet can be automatically
   populated (with assignments and completed reviews), the sec-dir may
   be able to make use of that same tool.

   A third improvement would be to move the review repository to an IETF
   hosted server.  This would provide us more reliability in terms of
   having a back-up server and is required when we automate the process.
   Thus, we should make this step a priority and the first step prior to
   any automation.


9.  Applicability

   As implemented today, the process has no formal role in the IETF
   standards process.  But as trust in the review team has built, and as
   the team itself has learned to deliver reviews that are generally
   well received, they have had a significant impact on document quality
   and on timeliness.  Rather than becoming a roadblock, they have (in
   general) allowed the General AD to feel more confident in reaching
   decisions and be more precise in resolving issues.  Since reviews
   now typically appear during IETF Last Call, the reviews, like the
   sec-dir reviews, are now generally expected.  So, the role of the
   team has evolved to be more formal than in the past (i.e., than when
   this document was first published in 2005).  However, the handling
   of the reviews remains entirely within the scope of the ADs, PROTO
   shepherds, WGs and authors, as they deem appropriate.


10.  Security considerations

   Since this is an informational document about an open process, the
   security considerations are specific to the process and users
   involved in the process.  The primary concern would be to limit the
   people who have access to the GenART data/files to ensure that the
   integrity of the data is maintained.  Also, once the data is moved to
   the IETF servers, the normal IETF processes should ensure that only
   authorized individuals can access the data.  For example, each GenART
   reviewer should have a unique user name/password, just as folks do to
   access any other IETF maintained data and/or tools, as appropriate.





Barnes, et al.          Expires January 13, 2011               [Page 18]

Internet-Draft                   GenART                        July 2010


11.  IANA considerations

   As this is an informational document about an IETF process, there are
   no IANA considerations.


12.  Acknowledgments

   Initial comments were received from the members of the GenART team,
   and the experiences discussed in this document were derived from
   their hard work over the last 6+ years.  We thank the past reviewers
   of the GenART team: Mark Allman, Harald Alvestrand (originator of
   GenART), Ron Bonica, Gonzalo Camarillo, Sharon Chisholm, Lakshminath
   Dondeti, Avri Doria (past secretary), Pasi Eronen, Eric Gray, John
   Loughney, Lucy Lynch, Michael Patton, Robert Sparks, Tom Taylor,
   Sean Turner, Christian Vogt and Suzanne Woolf, as well as the
   current team of reviewers/secretary: Mary Barnes (secretary),
   Richard Barnes*, David Black*, Scott Brim*(2), Ben Campbell, Brian
   Carpenter (past GEN AD), Elwyn Davies*, Spencer Dawkins*, Francis
   Dupont, Roni Even, Miguel-Angel Garcia, Dorothy Gellert, Vijay
   Gurbani (assisting secretary to upload reviews), Wassim Haddad,
   Joel Halpern, Avashalom Houri, Glenn Kowack, Suresh Krishnan, Peter
   McCann, Enrico Marocco, James Polk* and Stefan Santesson*.


13.  Changes since Last Version

   Changes between 02 and 03:

   1.  Updated to reflect current practices such as another volunteer
       uploading the reviews, the spreadsheet only being updated on
       Thursdays and the movement of the reviews to another server.
   2.  Updated step by step details on the secretary's process to
       facilitate tools requirements.
   3.  Updated lists of reviewers.
   4.  Added the documents which provide the baseline for reviews to the
       Informative References section.

   Changes between 01 and 02:

   1.  Updated to reflect current practices such as assignment at Last
       Call time being the norm, standard template for reviews/emails,
       new members of team, comments from current secretary, General
       Area AD and review team members.
   2.  Removed previous comments in the "Impressions" section that are
       no longer germane such as problems with no standard boilerplate,
       problems with LC reviews, etc.





Barnes, et al.          Expires January 13, 2011               [Page 19]

Internet-Draft                   GenART                        July 2010


   3.  Added step by step details on the secretary's process to
       facilitate tools requirements.
   4.  Added the documents which provide the baseline for reviews to the
       Informative References section.


14.  Informative References

   [RFC1958]  Carpenter, B., "Architectural Principles of the Internet",
              RFC 1958, June 1996.

   [RFC3426]  Floyd, S., "General Architectural and Policy
              Considerations", RFC 3426, November 2002.

   [RFC3439]  Bush, R. and D. Meyer, "Some Internet Architectural
              Guidelines and Philosophy", RFC 3439, December 2002.

   [I-D.rfc-editor-rfc2223bis]
              Reynolds, J. and R. Braden, "Instructions to Request for
              Comments (RFC) Authors", draft-rfc-editor-rfc2223bis-08
              (work in progress), July 2004.

   [RFC5226]  Narten, T. and H. Alvestrand, "Guidelines for Writing an
              IANA Considerations Section in RFCs", BCP 26, RFC 5226,
              May 2008.

   [RFC3552]  Rescorla, E. and B. Korver, "Guidelines for Writing RFC
              Text on Security Considerations", BCP 72, RFC 3552,
              July 2003.


Authors' Addresses

   Mary Barnes
   Polycom
   TX
   US

   Email: mary.ietf.barnes@gmail.com












Barnes, et al.          Expires January 13, 2011               [Page 20]

Internet-Draft                   GenART                        July 2010


   Avri Doria
   Lulea University of Technology
   Arbetsvetenskap
   Lulea
   SE-97187

   Email: avri@acm.org


   Harald Alvestrand
   Google
   Beddingen 10
   Trondheim  7014
   NO

   Email: harald@alvestrand.no


   Brian E Carpenter
   University of Auckland
   PB 92019
   Auckland, 1142
   New Zealand

   Phone:
   Email: brian.e.carpenter@gmail.com

























Barnes, et al.          Expires January 13, 2011               [Page 21]

