Talk:Tracking Reviews
Starting Comments
The most direct approach to tracking reviews has been undertaken by SIGMOD, which has created a "pipeline" with another major database conference, VLDB: some papers rejected at one conference are sent to the next, with their reviews carried over, and the original reviewers continue to be involved. This is being done on a limited basis, only for borderline papers where it is felt that a round of author revision could lead to a solid contribution. If this is successful, SIGMOD might extend the pipeline to include IEEE's ICDE conference as well. The idea is that this will reduce repeated submission of borderline papers. More importantly, SIGMOD anticipates this will help good papers with specific problems that can be fixed, much as the journal reviewing process helps in such situations.
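A minimal sketch of the record-keeping such a pipeline implies; the types, fields, and function here are illustrative assumptions, not an actual SIGMOD/VLDB system:

 # Hypothetical sketch of a SIGMOD/VLDB-style review pipeline's bookkeeping;
 # all names and fields are assumptions for illustration only.
 from dataclasses import dataclass, field
 from typing import List, Optional

 @dataclass
 class Review:
     reviewer_id: str  # stable across venues so the reviewer stays involved
     score: int
     comments: str

 @dataclass
 class Submission:
     title: str
     venue: str                         # e.g. "SIGMOD 2006"
     reviews: List[Review] = field(default_factory=list)
     prior_venue: Optional[str] = None  # set when carried over in the pipeline
     prior_reviews: List[Review] = field(default_factory=list)

 def carry_over(rejected: Submission, next_venue: str) -> Submission:
     """Forward a borderline rejected paper, reviews attached, to the
     next conference in the pipeline."""
     return Submission(
         title=rejected.title,
         venue=next_venue,
         prior_venue=rejected.venue,
         prior_reviews=list(rejected.reviews),
     )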
Often, such tracking is done informally in small communities. For example, SIGACT usually has at least one FOCS program committee member on the STOC committee, so that a paper submitted to both can be compared.
Another interesting idea is to track reviewers, to detect chronically irresponsible, weak or unfair reviewers. SIGIR is beginning to track reviewer quality, and is so far just in the data gathering stage.
Note that paper tracking is made more difficult by double blind reviewing.
SIGGRAPH
- SIGGRAPH uses a form of voluntary review tracking. Authors can request that the reviews of a rejected paper be forwarded to other venues, including next year's conference. They can then write a cover letter indicating how they addressed the reviewers' concerns. The hope is that this makes subsequent reviewing much easier. The scheme is voluntary so as to give authors the chance to escape from reviewers they may judge to be biased or uninformed.
- In a variant of this scheme, some authors are given a conditional accept to ACM TOG based on their conference-submission reviews. This is typically done for the small number of papers that are of high quality but not quite complete. In effect, it means that the conference reviewing is the first round of a standard journal review; hence it is a form of review tracking from the conference to the journal.
SIGUCCS
- Tracking reviews of rejected papers from conference to conference, as is done in journal reviewing: we don't have a formal mechanism for this (though we think it would be a good idea!), but since several folks serve on the program committee year after year, there is probably some informal check. Also, acceptance of a paper is tied to presentation at the conference.
SIGMOD
- SIGMOD has done the following:
- (1) Created a "pipeline" with another major DB conference, VLDB, where some papers rejected at one conference are sent to the next, with their reviews carried over. The original reviewers continue to be involved. This is being done on a limited basis, only for borderline papers where it is felt that a round of author revision could lead to a solid contribution. If this is successful, we might extend the pipeline to include IEEE's ICDE conference as well. The idea is that this will reduce repeated submission of borderline papers. More importantly, we hope this will help good papers with specific problems that can be fixed, much as the journal reviewing process helps in such situations. ...
SIGDA
- Yes, but everything is done manually and ad hoc (e.g., it happens that some TPC member in one conference saw the same paper in another conference). That's why I think support from the ACM Pubs Dept. for querying existing submissions (not only rejects) across conferences would be good; a hypothetical sketch of such a query follows. ...
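A hypothetical sketch of the kind of cross-conference query such ACM-level support could enable; the shared database, its schema, and the sample rows are assumptions for illustration, since no such service exists in this discussion:

 # Hypothetical cross-conference submission query; the shared ACM database,
 # its schema, and the sample rows are assumptions for illustration only.
 import sqlite3

 conn = sqlite3.connect(":memory:")  # stand-in for an assumed shared database
 conn.execute("""CREATE TABLE submissions
                 (title TEXT, authors TEXT, conference TEXT,
                  year INTEGER, decision TEXT)""")
 conn.executemany("INSERT INTO submissions VALUES (?, ?, ?, ?, ?)", [
     ("Fast Routing", "A. Author", "DAC", 2005, "reject"),
     ("Fast Routing", "A. Author", "ICCAD", 2005, "accept"),
     ("Slow Routing", "B. Author", "DAC", 2005, "accept"),
 ])

 # Flag papers submitted to more than one conference (not only rejects),
 # so the overlap need not be noticed by chance by a shared TPC member.
 query = """SELECT title, GROUP_CONCAT(conference || ' ' || year)
            FROM submissions
            GROUP BY title, authors
            HAVING COUNT(*) > 1"""
 for title, venues in conn.execute(query):
     print(title, "->", venues)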
SIGSIM
- Programme committee sizes are growing. We tend to track rejected papers by knowledge sharing between common PC members.
ICSE
- The idea of tracking reviews has been discussed, but nothing has yet been implemented.
OOPSLA
- No formal process in place.
Discussion Begins
One of the problems with the number of submissions is understanding just why there are so many. Hence, one thing I like about tracking papers is that we would get some data showing how often a submission is a resubmittal.
I also think we should start tracking reviewers.
Those of us who teach are used to getting reviews from our customers (students) on our performance. I think it would be great for authors to review the reviewers, asking them questions about the quality of the review (a sketch of how such ratings might be aggregated follows this list). Examples include:
- How carefully did the reviewer read the paper? [1] skim ... [4] fine-toothed comb
- How much detail did the reviewer put into the suggestions? [1] very vague ... [4] thorough
- What was the attitude of the reviewer in the comments? [1] rude ... [4] constructive criticism
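A minimal sketch of how these 1-4 ratings might be aggregated; the data layout is an assumption for illustration. Author identities are never stored, and only per-reviewer averages are reported:

 # Hypothetical sketch of anonymized reviewer-feedback aggregation on the
 # 1-4 scales above; the data layout is an assumption for illustration.
 from collections import defaultdict
 from statistics import mean

 # (reviewer_id, carefulness, detail, attitude) tuples collected from authors
 feedback = [
     ("R1", 4, 3, 4),
     ("R1", 3, 4, 4),
     ("R2", 1, 1, 2),
 ]

 scores = defaultdict(list)
 for reviewer, care, detail, attitude in feedback:
     scores[reviewer].append((care, detail, attitude))

 # Report for the PC chair only: per-reviewer averages, no author names.
 for reviewer, ratings in sorted(scores.items()):
     care, detail, attitude = (mean(col) for col in zip(*ratings))
     print(f"{reviewer}: carefulness {care:.1f}, detail {detail:.1f}, "
           f"attitude {attitude:.1f}")

Stripping author names before anyone on the committee sees the numbers is one way to address the retaliation concern raised below.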
We should collect the reviews of the reviewers and distribute them to the program committee before it meets, and then pass them along to the reviewers themselves as well.
Dave Patterson
I like the idea of giving reviewers some feedback. We've all certainly gotten our share of shallow reviews with flip, unsubstantiated reasons for rejection. But reviewers are hard to come by, so we'd want to make sure that this is balanced and doesn't discourage good reviewers. Also, most bad reviewers are likely to ignore any corrective feedback.

One thing about rebuttals is that authors have to be on their best behavior, since the decision on the paper is still pending. What if an author gives a bad review to a PC-member reviewer who sees the feedback before the PC meeting? Is the paper hosed? We might get better feedback if it were given before the paper decision was communicated, but in a way that was anonymized for everyone except the PC chair during selection.

Tracking reviewers between conferences raises privacy issues, much like credit scores: how are mistakes fixed? What if the paper is in an area outside the reviewer's expertise? And so on.

Norm Jouppi (SIGARCH)