The PoPETs Experiment (2019)
After many months of conversations with members of our community about
the health of the reviewing process in security and privacy conferences,
we think it would be a good idea to repeat the famous NIPS consistency
experiment.
For those who are not familiar: the NIPS experiment was carried out in 2014
at the NIPS conference with the goal of quantifying randomness in the review
process. The organizers split the program committee into two independent
program committees. Then, 90% of the papers were assigned to one of the two
PCs, and 10% were reviewed by both. This made it possible to observe how
consistent the PCs were. The results were (not?) surprising: of the 166
papers reviewed by both committees, the PCs disagreed on the decision for
about 25%.
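For readers who want to see what this measurement looks like concretely, here is a minimal Python sketch of computing a disagreement rate from two committees' decisions on the same set of papers. The paper IDs and decisions are hypothetical placeholders, not the NIPS data.

```python
# Minimal sketch: disagreement rate between two PCs on dually reviewed papers.
# Paper IDs and decisions below are hypothetical placeholders.

decisions_pc_a = {"paper-1": "accept", "paper-2": "reject",
                  "paper-3": "accept", "paper-4": "reject"}
decisions_pc_b = {"paper-1": "accept", "paper-2": "accept",
                  "paper-3": "accept", "paper-4": "reject"}

# Only papers reviewed by both committees count.
common = decisions_pc_a.keys() & decisions_pc_b.keys()

disagreements = sum(decisions_pc_a[p] != decisions_pc_b[p] for p in common)
rate = disagreements / len(common)
print(f"Disagreed on {disagreements} of {len(common)} papers ({rate:.0%})")
```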
We feel that PoPETs is in a great position to repeat this experiment and gain
insight into the randomness of reviews in the security and privacy domain.
We will organize the experiment as follows:
- We will split the PC in two: PC-A and PC-B, making sure
(manually) that both PCs contain representative expertise in the topics
relevant to PoPETs. Both PCs are composed of members of the program
committee/editorial board listed in the CFP.
- When the papers arrive, we will select 20 papers that will
be reviewed by both PCs; the remainder will be assigned to one of the
two committees uniformly at random (a minimal sketch of this step appears
after this list).
- Resubmissions will not be chosen for the experiment, regardless of
the previous decision.
- We will duplicate the HotCRP instance, and distribute the
papers accordingly. The members of PC-A will be assigned to PoPETS2019 and
the members of PC-B will be assigned to PoPETS2019-B.
- Members of each PC will not know whether a paper they
review is also being reviewed by the other PC.
- Both PCs will run as usual, with the same phases and
deadlines.
- We trust you not to transmit information from one PC
to the other. (The only case in which information transfer might be
desired is when one PC discovers a fundamental technical flaw/attack on
a paper; such cases will be handled by the Chairs.)
- At the end of the decision phase, we will take the better
of the two decisions for the paper (i.e., always benefiting the authors;
see the sketch after this list). This is important to avoid making the
authors feel uneasy about the experiment.
- Note that after the decisions are sent, reviews will be made available to all program committee/editorial board members (except for those members with conflicts).
- In case of Major Revision:
- If only one PC decided MR, when the paper returns it
will be assigned for a single review to the same set of reviewers that
issued the MR decision.
- Changed*: If both PCs decided MR, we will randomly select one of the
PCs to review the revised version, and thus the authors will need to
follow that PC's meta-review and comments. However, in good faith, we
expect the authors to also take into account the second PC's reviews.
For reference, we will make the second PC's reviews and meta-review
available to the PC that is responsible for the revised version.
(*) Initially, we planned for the Chairs to merge the meta-reviews
in agreement with the reviewers. In practice, merging the meta-reviews
proved to be infeasible.
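For concreteness, here is a minimal Python sketch of the assignment step and of the rule for combining the two decisions of a dual-reviewed paper, both referenced above. The paper IDs, function names, and decision scale are illustrative assumptions on our part; the real process runs through HotCRP.

```python
import random

# Sketch of the assignment step: 20 papers go to both PCs, the rest to one
# PC chosen uniformly at random. Paper IDs and function names are illustrative.

def assign_papers(papers, resubmissions=(), n_dual=20, seed=None):
    rng = random.Random(seed)
    # Resubmissions are never selected for dual review.
    eligible = [p for p in papers if p not in set(resubmissions)]
    dual = set(rng.sample(eligible, n_dual))   # reviewed by both PCs
    pc_a, pc_b = list(dual), list(dual)
    for paper in papers:
        if paper in dual:
            continue
        # Each remaining paper goes to exactly one committee, chosen
        # uniformly at random.
        (pc_a if rng.random() < 0.5 else pc_b).append(paper)
    return pc_a, pc_b

# Combining the outcomes for a dual-reviewed paper: the more favorable
# decision wins ("always benefit the authors"). The decision scale here
# is an assumption made for illustration.
RANK = {"reject": 0, "major revision": 1, "minor revision": 2, "accept": 3}

def best_decision(decision_a, decision_b):
    return max(decision_a, decision_b, key=RANK.__getitem__)

papers = [f"submission-{i}" for i in range(1, 101)]   # hypothetical IDs
pc_a, pc_b = assign_papers(papers, seed=2019)
print(len(pc_a), len(pc_b))                      # 20 shared papers plus ~40 each
print(best_decision("major revision", "reject"))  # -> "major revision"
```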
To ensure that the duplication of papers does not impose a high load on the
PC, we have composed a larger PC than in previous years. Also, if the number
of submissions grows too much, we will suspend the experiment or reduce the
number of papers reviewed by both committees.
Authors will be informed of the experiment upon submission so that they can
withdraw if desired. They will know whether their paper will get two sets of
reviews (as they will have to write two rebuttals anyway). We thank the
authors for participating in the experiment and for tolerating the extra work
caused by the double reviews.
We hope that you find this experiment as exciting as we do. Please send us
your feedback and questions. We want to make this a great experience for
everyone.
Looking forward to a successful PoPETs 2019!
Kostas and Carmela
pets19-chairs@petsymposium.org