Shining some Sunlight on Conference Reviews

There’s ongoing discussion of our conference review process.  Unsurprisingly, it seems to spike around the time people get reviews of their own work.  A lot of the griping asserts that reviewers aren’t doing a good job.  Some argue for abandoning pre-publication review in favor of accepting everything and reviewing post-publication.  But I’m a big supporter of the model where experts help authors improve their work and filter material so I know where to look for the good work.

Without abandoning it, I think there are some great opportunities to improve the review process by adding some transparency.  I’d like to address three problems with the current pre-publication review process:

  1. Reviewers know that their work will never be seen, so they are missing all the external motivators (praise for good work, scorn for bad) for doing a good job; only altruism remains.
  2. Authors who feel unfairly reviewed have to shut up and take it.
  3. New reviewers have no real models of good work to emulate.

To address these, I propose the following addition to the review process.  When reviewers submit a review, give them a box they can check if they are willing to have their review published, either anonymously or under their own name.  Similarly, when authors read reviews, give them a checkbox to indicate their willingness to have those reviews made public, as well as a checkbox to indicate their willingness for their initial submission to be made public.  These permissions could be used in a number of interesting ways.

First, reviewers who had the authors’ permission could begin to maintain and publish a corpus of the reviews they’ve written.  Even when I end up as Reviewer 2, I try to invest a lot of work in writing a good review, and I’d love to be able to show that work to other people.  A good reviewer could get credit in the community for their thoughtful reviews.  I suspect having a record like this would also make that reviewer a first choice for reviewing the most interesting submissions in our field (if we wanted to add incentive, we could give such high-quality reviewers first dibs on bidding to review specific papers).

Second, authors who disagreed with their reviews could post responses to them.  It wouldn’t change the outcome, but it could certainly give the authors some satisfaction.  And the feedback, if valid, might help the reviewer improve in the future.

Perhaps most important, these public reviews and submissions could become a valuable educational resource.  New reviewers could be directed to a corpus of high-quality reviews that they could use as models for their own.  This would work even better in combination with publication of the original and final submissions.  I’ve been asked to help teach a class on reviewing next semester; I would love to be able to take some original submissions, ask the students to review them, then show the students the real reviews and, finally, the paper that was ultimately published.  We should be able to see the kinds of problems that reviewers noticed, and the ways the authors chose to deal with those problems in revision.

These are just the direct benefits; there are also strong indirect ones.  Even if it’s optional, that waiting checkbox will be on reviewers’ minds as they review; they can take the easy way out, but knowing that there’s a community desire for openness might encourage them to work a little harder at the review in order to make it publishable.  It won’t fix the really awful reviewers, but it will lead those on the boundary to do a little more.  That might mean not just reading the paper more carefully, but also taking the time to make sure that the tone of the review is respectful and encouraging to the authors.  Conversely, knowing that the community would prefer to publish the original submission might encourage authors to deliver one of higher quality.  Again, those who plan to have the committee do their work for them (framing the problem, finding related work, and suggesting better experiments) aren’t going to check that box, but those on the borderline might decide to wait for the next conference if their paper isn’t ready yet.

Some years ago I went to a HotNets conference where each paper was preceded by a commentary from one of the PC members.  If reviews could be published, it could be quite interesting to include them in the proceedings with each paper, to serve just that commentary role.  People who signed their reviews could get some nice credit for insightful commentary, and we could even award prizes to the best ones.

An important consideration is that a reviewer permitting their review to be published isn’t sufficient—the paper author has to agree as well.  Otherwise, the review might leak information about the submission that the author isn’t willing to share.
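
To make that rule concrete, here is a minimal sketch of the consent logic in Python.  It is only an illustration under my own assumptions; the data model and all the names in it are hypothetical, not any real conference system’s interface:

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class Review:
        text: str
        reviewer_name: str
        reviewer_allows_publication: bool = False  # the reviewer's checkbox
        reviewer_wants_anonymity: bool = True      # publish anonymously vs. signed
        author_allows_publication: bool = False    # the author's checkbox

    def published_form(review: Review) -> Optional[Tuple[str, str]]:
        """Return a (byline, text) pair if the review may be published, else None."""
        # Both parties must opt in; either one withholding consent keeps it private.
        if not (review.reviewer_allows_publication and review.author_allows_publication):
            return None
        byline = "Anonymous" if review.reviewer_wants_anonymity else review.reviewer_name
        return (byline, review.text)

The key design choice is the conjunction: publication defaults to private, and unchecking either box is enough to keep a review out of any public corpus.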

Are there downsides?  We might worry about increasing conflicts of interest.  A junior researcher might worry about being associated with a negative review of a powerful colleague’s paper.  But I think this can often be addressed by allowing anonymous publication of those reviews.  And there’s always the option to keep a review private.

In summary, I think this is an approach that creates several opportunities without forcing any undesired change.  I know that I would often be quite eager to share my own reviews (they become a useful commentary on the state of the art), and comfortable sharing most of the reviews I’ve received and my preliminary submissions.  How about you?

3 Responses to “Shining some Sunlight on Conference Reviews”

  • James Coyne says:

    In what fairytale kingdom does the author of this blog live?

    There is overwhelming evidence that editors do not simply assist authors to improve the quality of their research. Rather, they enforce confirmatory bias and are particularly inclined to publish articles in hot topic areas, regardless of their hype, distortion, and unlikelihood of being replicated.

  • David Karger says:

    It’s hard to refute evidence that isn’t provided. But I know that many of my papers have been improved by reviewer comments (including some papers that went through three rounds of rejection before acceptance).

  • Juho Kim says:

    Learning to be a good reviewer is hard. I think the vast majority of researchers never get a good chance to learn. They mostly rely on their limited experience of seeing other reviewers’ work and adapting to it. As a student, I find the idea of a mock PC/conference, also mentioned in http://greatresearch.org/2013/10/18/the-paper-reviewing-process/ , to be particularly valuable. It’d be great to have a short course like this (during IAP, maybe). Learning how the internals of an academic conference work definitely helped me plan and write my paper better.

    Another important challenge is assigning the right reviewers to each paper. I don’t know exactly how this match-making works in different conferences, but there might be more data- and community-driven ways to do it than committee members reaching out to whoever they think is suitable.