PoPETs Artifact Review

PoPETs reviews and publishes digital artifacts related to its accepted papers. This process aids in the reproducibility of results and allows others to build on the work described in the paper. Artifact submissions are requested from authors of all accepted papers, and although they are optional, we strongly encourage you to submit your artifacts for review.

Possible artifacts include (but are not limited to) source code, datasets, and other supporting materials such as survey instruments.

Artifacts are evaluated by the artifact review committee. The committee evaluates the artifacts to ensure that they provide an acceptable level of utility, and feedback is given to the authors. Issues considered include software bugs, readability of documentation, and appropriate licensing. After your artifact has been approved by the committee, we will accompany the paper link on petsymposium.org with a link to the artifact along with an artifact badge so that interested readers can find and use your hard work.

Artifact Submission Guidelines

Source Code Submissions

Dataset Submissions

Artifact Badges

For PETS 2024, each accepted artifact will be granted one of the following two badges. During submission, authors must select which badge they want their artifact to be evaluated against.

Artifacts Available

The "Available" badge indicates that the artifacts are publicly available at a permanent location, with clear documentation on how they relate to the corresponding paper and, if applicable, how to execute them without error. This badge does *not* mean that the reviewers have reproduced the results. Authors whose artifacts require highly specialized hardware or software are encouraged to choose this option. Similarly, authors whose artifacts are not reproducible by nature (e.g., the outcomes of surveys) should also select this option.

Artifacts Reproduced

The "Reproduced" badge indicates everything the "Available" badge does and, in addition, that the submitted artifacts reproduce the main findings of the paper. Note that this does not necessarily cover all experiments or data presented in the paper. To submit artifacts for this badge, authors must clearly specify the commands to run the artifacts, describe how to reproduce each main finding of the paper, and highlight which results of the paper are not reproducible with the given artifacts. The artifact's quality, structure, and documentation must allow the reviewers to check whether the artifact works as claimed in the paper. Even if the authors request this badge, the review committee may grant only an "Available" badge if the reviewers cannot reproduce the results (e.g., for lack of computational resources).

What we expect from the authors of artifact submissions

To ensure a smooth submission process, please follow these guidelines. First, fill out the provided template.md file and include it in your artifact; this helps reviewers understand your work and keeps the review process on track. Second, respond promptly to reviews and comments, within two weeks, so that discussions stay constructive and feedback can be incorporated in a timely manner. Finally, if changes are requested during the review process, try to incorporate them, at least partially, within two weeks of the request. Your cooperation with these guidelines greatly contributes to an efficient and effective review process. We look forward to receiving your high-quality contributions and to showcasing your research!

What Makes a Good Review

The goal of artifact review is to help ensure the artifacts are as useful as possible, so reviews should check for the issues described above (software bugs, readability of documentation, and appropriate licensing). The artifact review process is interactive: we expect authors to take reviewers' comments into account and modify their artifacts accordingly. Reviews should therefore contain enough detail for authors to make the appropriate changes; for example, if the code fails, the review should include the environment it was run in and the error messages. After the authors have fixed the issues, they add a comment on the submission site, at which point the reviewers either approve the artifact or provide further comments for another round of revision.

Volunteer for the Artifact Review Committee

We are looking for volunteers to serve on the artifact review committee. As a committee member, you will review artifacts according to the guidelines above. We welcome volunteers who are interested in providing feedback on documentation and instructions, getting source code to build, or re-using published datasets. Please email artifact24@petsymposium.org to join the review committee.

Artifact Review Committee:
Abdul Haddi Amjad, Virginia Tech
Alexandra Nisenoff, Carnegie Mellon University
Anna Lorimer, University of Chicago
Arnab Bag, imec
Benjamin Mixon-Baca, Arizona State University/Breakpointing Bad
Carolin Zoebelein
Cori Faklaris, University of North Carolina at Charlotte
Daniel Schadt, Karlsruhe Institute of Technology (KIT)
Darion Cassel, Amazon Web Services
Evangelia Anna Markatou, Brown University
Hao Cui, University of California, Irvine
Hari Venugopalan, UC Davis
Hieu Le, University of California, Irvine
Iyiola Emmanuel Olatunji, L3S Research Center, Leibniz University Hannover
Julian Todt, Karlsruhe Institute of Technology (KIT)
Karoline Busse, University of Applied Administrative Sciences Lower Saxony
Kasra Edalatnejadkhamene, EPFL
Killian Davitt, UCL
Kris Kwiatkowski, PQShield
Lachlan Gunn, Aalto University
Logan Kostick, Johns Hopkins University
Loris Reiff
Luigi Soares, Universidade Federal de Minas Gerais
Malte Wessels, TU Braunschweig
Marc Damie, Inria
Maximilian Noppel, Karlsruhe Institute of Technology (KIT)
Minh-Ha Le, Linköping University
Miti Mazmudar, University of Waterloo
Nadim Kobeissi, Polygon Labs / Symbolic Software
Naser Ezzati-Jivan, Brock University
Natasha Fernandes, Macquarie University
Nathan Reitinger, University of Maryland
Nurullah Demir, Institute for Internet Security
Panagiotis Chatzigiannis, Visa Research
Pasin Manurangsi, Google Research
Phi Hung Le, Google
Prajwal Panzade, Georgia State University
Preston Haffey, University of Calgary
Rasmus Dahlberg, Independent
Sebastian Hasler, University of Stuttgart
Shangqi Lai, CSIRO's Data61
Shashwat Jaiswal, University of Illinois Urbana-Champaign
Shijing He, King's College London
Simon Koch, Technische Universität Braunschweig
Sofía Celi, Brave
Tushar Jois, City College of New York
Vadym Doroshenko, Google
Vijayanta Jain, University of Maine
Xiao Zhan, King's College London
Yash Vekaria, University of California, Davis
Yohan Beugin, University of Wisconsin-Madison
Yuzhou Jiang, Case Western Reserve University