Q1: Medium Conferences

From Health of Conferences Committee


Question 1: REVIEWER LOAD.

Has your community recently adopted new practices to deal with growing reviewer load, such as:

  • tracking reviews of rejected papers from conference to conference as is done in journal reviewing
  • increasing program committee size
  • charging a review fee
  • others?

For each practice you are using, what is your view of how well it is working within your community? Please comment on the merit of the other strategies as they apply to your community.


SIGART

We increased the size of the PC and adopted a two-layer approach, with Senior Program Committee (SPC) members supervising the work of PC members and coordinating the discussion among each paper's reviewers to try to reach consensus.
The PC/SPC approach works reasonably well. There are still some inconsistencies in the quality of what gets accepted, but fewer than without the SPC.

SIGARCH

  • tracking reviews of rejected papers from conference to conference as is done in journal reviewing: No
  • increasing program committee size: Sometimes
  • charging a review fee: No


SIGCHI

Most of our highly selective conferences use a process where the program committee (usually about one person for every 10 submissions) is responsible for managing the reviews (either soliciting reviewers or managing ones assigned from a reviewer pool) and compiling a meta-review. A second PC member may be called upon for a second opinion in the PC meeting and, if needed, for a review.
As the number of submissions increases, we've seen some of our program committees grow. We've also seen some of them relax the process of secondary reviews so that only controversial papers get a second review.
Also, in our area, we've seen a phenomenon parallel to increasing submissions: an increase in the number of new conferences and other venues.
In some ways, this has helped to moderate the number of submissions to our older conferences.


SIGIR

SIGIR uses a two-tier Program Committee. Ordinary members (reviewers) review in the usual way, and each paper gets 3 reviews. We try to keep a reviewer's load in the range of 6-8 papers; 6 is the target, but sometimes it drifts up. Our PC has grown considerably in order to maintain this load, but that's okay; we view it as being more inclusive and as making it easier for junior members to get involved earlier in their careers. Area Coordinators are senior, more experienced members. They are responsible for monitoring the reviews of a larger number of papers (perhaps 12-15) to ensure review quality and to resolve major differences of opinion among reviewers prior to the Area Coordinators meeting. SIGIR still uses a face-to-face meeting of Area Coordinators to make the final selection of papers. It is expensive, but we believe that it helps maintain some consistency across the different reviewers and Area Coordinators.
We are also beginning to track reviewer quality, so that we can mentor or weed out bad reviewers in the future; so far we're just in the data gathering stage.
We feel that this approach works well. However, as our submissions have exploded, the Area Coordinators meeting is getting rather large. We are worried that we may need to make adjustments in the next 1-3 years.
We do not track papers from conference to conference. Our main conferences are still too independent to make that practical. We would be violently opposed to charging a reviewing fee.


SIGACT

Tracking is done informally because we have overlap across our various conferences; at least one FOCS committee member is usually also on the STOC committee.
The committee size has increased gradually. The committee members use subreviewers for papers that are not in their area.
Subreviewers are recognized in the proceedings for some conferences.


SIGPLAN

We have slightly increased the size of some of our program committees in response to larger numbers of submissions, to keep the reviewing load manageable. My sense is that the community does not view reviewing load for program committee members as a problem at this time.


SIGMOD

SIGMOD has done the following:
(1) Created a "pipeline" with another major DB conference, VLDB, where some papers rejected at one conference are sent to the next, with their reviews carried over. The original reviewers continue to be involved. This is being done on a limited basis, only for borderline papers where it is felt that a round of author revision could lead to a solid contribution. If this is successful, we might extend the pipeline to include IEEE's ICDE conference as well. The idea is that this will reduce repeated submission of borderline papers. More importantly, we hope this will help good papers with specific problems that can be fixed, much as the journal reviewing process helps in such situations.
(2) Increased PC size moderately.
(3) Introduced a 2-phase review process where all papers are assigned two reviewers in Phase 1, and only papers with at least one positive reviewer are assigned a third reviewer. This is a compromise that allows the maximum reviewing resources to be devoted to those papers that are in serious contention.
With regard to how well it is working: too early to tell, I think.


SIGCOMM

We have experimented with larger program committees, as well as heavier use of outside reviewers. For SIGCOMM'06, we are experimenting with a two-tier PC, including a "PC lite" that does reviews to help narrow down the set of serious contenders but does not attend the PC meeting, and a "PC heavy" that will do reviews and attend the PC meeting (and hopefully each review a large fraction of the papers in serious contention). Tom Anderson and Nick McKeown are running the process.
For several years, SIGCOMM's main conference has had a "quick reject" process, where some fraction (typically 10-20%) of papers are rejected at an early stage based on one PC member's view (the lead reviewer) and a double-check by the PC chairs. This is to get rid of papers with a serious flaw -- out of scope, lacking an evaluation, clearly non-novel, etc. (i.e., only things that the PC member could determine pretty quickly, without requiring an in-depth read of the paper).
This year, SIGCOMM will be accepting more papers, while remaining a single-track event, by having shorter talks (e.g., 20 vs. 30 minutes), to help address the sub-10% acceptance rates that have continued to plague us. We have also, during the last 5-6 years, spawned (or co-spawned) several other events that try to accommodate the volume of papers and address the needs of emerging sub-fields. Examples include NSDI, ANCS, HotNets, the Internet Measurement Conference, etc., our "in cooperation with" status with CoNext, and the four workshops we co-locate with SIGCOMM each year (which change from year to year).
To help train future PC members, we've started having shadow PCs for the main conference. See article at
http://portal.acm.org/ft_gateway.cfm?id=1070889&type=pdf&coll=portal&dl=ACM&CFID=48579656&CFTOKEN=2497629
from the July 2005 issue of SIGCOMM Computer Communications Review. Scott Shenker and Alex Snoeren are coordinating shadow PCs for SIGCOMM'06, where any school can request to run their own shadow PC.
We're still in the stage of experimenting, struggling with the fact that large PCs lead to difficulties in calibrating across papers. Creating separate events has been very fruitful, in that we have helped foster new communities. The Internet Measurement Conference is a great example of that. The workshops co-located with SIGCOMM have been good for that as well, as they offer a low-risk way for folks to experiment with a new workshop topic that, if successful, can blossom into its own stand-alone event down the road. This also helps increase the number and breadth of folks that attend the conference, hopefully leading to a broader pool of authors of accepted papers down the road. (Like so many conferences, we have a reputation of being a bit of an insider community, and certainly the vast majority of the SIGCOMM conference papers come from a relatively small set of U.S. schools and research labs, though that is gradually changing.)


SIGKDD

  • tracking reviews of rejected papers from conference to conference as is done in journal reviewing
  • increasing program committee size: YES
  • charging a review fee: NO
  • others? We are considering area chairs.
We did increase program committee size. But managing and evaluating so many papers and 1000+ reviews was difficult: with a larger program committee it is harder to ensure that papers are appropriately assigned, and more difficult to ensure the quality and consistency of all reviews.
We recommended that this year's Program Chairs use a more hierarchical program structure so that there can be better oversight of the reviewing and discussion processes.


SIGOPS

The idea of tracking rejected papers is brought up periodically. SOSP may stop using double-blind reviewing to help support tracking.
PC sizes have gone up.
DSN (Dependable Systems and Networks; not an ACM conference) has long used external reviewers liberally. This year, they have included the external reviewers in bulletin-board discussions, creating a pre-PC meeting in which they can participate.
Larger PCs come with their own problems, of course.
It's too soon to say anything about DSN's experiment.
I had never heard the idea of charging for reviews before; I think it would be a very unpopular one.


SIGMOBILE

Reviewer load has certainly been a problem, as the number of submissions to our conferences remains high and continues to grow.
We have been tending to increase the size of the program committee over the years to deal with this, but we have recently concluded that this creates its own problem. Such a large PC (50-60 members) creates a situation where, when papers are discussed at the PC meeting, each PC member has probably reviewed only a couple of the papers that get discussed (we can only discuss the ones that got reasonably good reviews). With a smaller PC, each person would have reviewed more of the discussed papers and would thus be more involved in the meeting itself. Currently, PC members find the meeting boring since they have read such a small fraction of the discussed papers.
To help with the load, we have in the past few years been using a "quick reject" procedure. Papers that are obviously out of scope, way over the length limit, clearly not original, etc., can be rejected with a single quick review. There is, of course, a danger of mistakenly quick-rejecting a paper, but so far we have not had a problem that we know of.
I've heard the suggestion of charging for submissions before, but I don't see how it can really work fairly. I don't like the idea.
