Talk:Accepting More Papers
From Health of Conferences Committee
Revision as of 16:58, 20 July 2006 by 64.93.73.62 (Talk | contribs) — Starting Comments (← Previous diff)
Revision as of 16:01, 21 July 2006 by 199.222.74.48 (Talk | contribs) (Next diff →)
Revision as of 16:01, 21 July 2006
To add your comment to this discussion, please click the + sign tab above. Like an email message, you can then contribute:
- a subject (use subject Re: FOO to continue a discussion of FOO)
- message body
- (optionally) your name.
Starting Comments
Some argue that too-high acceptance rates (e.g., 40%) don't challenge the field enough, while too-low acceptance rates (< 15%) encourage too much conservatism in program committees. Thus, as a field grows, some feel the paper publishing opportunities should also grow to keep acceptance rates reasonable (e.g., 20-30%).
SIGPLAN
- There also seems to be a sense that the conference/journal system is broken. At least a vocal minority thinks that our community places too much importance on conference papers. This group thinks we need to improve the journal response rate, make journal publication meaningful, and increase the acceptance rate at conferences significantly.
SIGMETRICS
- Sigmetrics has increased the committee size (since we are supposed to review all the papers ourselves) and slightly increased the number of accepted papers (one year we had 2 parallel sessions, and we've had longer days), so basically we can do a bit over 30 papers these days (rather than around 20-25). ...
SIGACCESS
- For the '05 conference, there were more submissions than expected, and the number of manuscripts assigned to each reviewer was very large (about 7). Not surprisingly, this made members of the Program Committee unhappy, since they had not expected so many manuscripts to review. To deal with this, the PC Chair indicated that committee members could ask colleagues to review some of the papers in areas in which those colleagues were particularly qualified. Reviewers recruited in this way were listed separately in the conference Proceedings.
- While this system worked post hoc for the situation, it is not ideal. For '06, a larger PC has been created. The conference has not resorted to charging a reviewing fee or tracking reviews from conference to conference. We have, however, worked to better describe the criteria for acceptable papers in the hope of reducing the number of unqualified submissions. For example, in the past there has always been a tendency for authors to submit papers about proposals that have not been tested sufficiently. The CFP for '06 makes clear that testing with user populations will be important in the evaluation of submitted manuscripts.
SIGGRAPH
- Even though submissions to the SIGGRAPH Papers Program have increased by 50% over the past few years, the acceptance rate has stayed fairly constant at 18%-20%. However, other programs at the conference have grown during this time, such as the Sketches and Applications Program, which is essentially a short-papers program.
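The SIGGRAPH figures above imply that even a constant acceptance rate yields more accepted papers when submissions grow. A minimal sketch of that arithmetic, using hypothetical submission counts (the page gives only percentages, not absolute numbers):

```python
def accepted(submissions, rate):
    """Number of papers accepted at a given acceptance rate."""
    return round(submissions * rate)

# Hypothetical baseline of 500 submissions; 50% growth gives 750.
# The ~20% rate matches the 18%-20% range reported above.
before = accepted(500, 0.20)
after = accepted(750, 0.20)

print(before, after)  # 100 150: accepted papers grow in step with submissions
```

So holding the rate "fairly constant" is itself a (modest) form of growing the papers program, distinct from adding venues like the Sketches and Applications Program.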
OOPSLA
- OOPSLA has recently introduced different categories to the papers track (e.g., Essays, selected papers from the OnWard! track that pass the same technical papers review criteria).
Discussion Begins
Mike Franklin at Berkeley told me that SIGMOD has been giving a "Test of Time" award for several years now (for the best paper from the conference held 10 years earlier) as well as an immediate best paper award, so they are able to line the two up to see if there are correlations.
Basically, it's rare for the two to match. The argument is that the papers with the very best numerical reviews may follow conventional wisdom in the field and be very well done. (All the committees I've been on that pick a best paper look at the top few papers with the best reviewing numbers.) Something more disruptive is more likely to get at least a few reviewers who disagree, which knocks it out of the best paper sweepstakes.
This matches my anecdote about the Google paper on distributed MapReduce not getting the best paper award at an OS conference, but I'm confident that it will stand the test of time extremely well.
To me this adds to the arguments for increasing the number of papers that are accepted, since provocative, important papers are not likely to get uniformly glowing reviews. My view would be that if at least one member of a program committee is willing to champion a paper that has mixed reviews, it should be seriously considered.
Dave Patterson
I think the 20-30% acceptance rate goal mentioned above is good. In 2005 at ISCA there was a concerted effort to accept more papers, and the conference was fully double-tracked (instead of partially double-tracked). I think the diversity of the program benefited significantly as a result, and it had a positive impact on true quality (the kind that stands the test of time). Often many top-ranked papers are just non-controversial papers that have lots of quantitative data. Norm Jouppi (SIGARCH)