Heavily inspired by the ACM Policy on Authorship, as used by CCS 2026, and the USENIX Security ‘26 Between-Cycle Transparency Report.
Violations of this AI policy may lead to immediate (desk) rejection, removal from the PC, or, in serious cases, the program chairs reporting offending authors or reviewers to their institutions for further investigation.
Three guiding principles:
- Authors retain full responsibility for the accuracy, originality, and integrity of the submitted paper and work described therein.
- Mandatory disclosure of AI use: full transparency to reviewers upon submission (see the HotCRP submission form) and to the broader community once published (see the PoPETs 2027 LaTeX template; the disclosure does not count towards the page limit).
- Hallucinations, fabrication, omissions, and falsification are treated as research misconduct.
Provided the three principles above are fully complied with, the use of generative AI is permitted. Generative AI tools cannot be listed as authors of papers submitted to PoPETs.
Be extremely careful when using AI for references and practical bibliography management (e.g., creating BibTeX entries), as such tools are prone to fabricating plausible-looking citations. As the community gets to grips with these technologies, err on the side of providing more detail in your disclosure of AI use.
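One low-effort safeguard is to mechanically flag generated BibTeX entries that lack fields you can cross-check by hand (e.g., a DOI to verify against the publisher's record). The sketch below is purely illustrative; the entry shown is hypothetical, and the field list is an assumption, not a PoPETs requirement:

```python
import re

def bibtex_fields(entry: str) -> dict:
    """Extract key = {value} pairs from a single BibTeX entry."""
    return {k.lower(): v for k, v in re.findall(r'(\w+)\s*=\s*\{([^}]*)\}', entry)}

def flag_for_review(entry: str, required=("author", "title", "year", "doi")) -> list:
    """Return required field names missing from the entry, for manual checking."""
    fields = bibtex_fields(entry)
    return [f for f in required if f not in fields]

# Hypothetical AI-generated entry: looks plausible, but has no DOI to verify.
entry = """@inproceedings{doe2024example,
  author = {Doe, Jane},
  title  = {An Example Paper},
  year   = {2024},
}"""
print(flag_for_review(entry))
```

A missing DOI does not make an entry wrong, but every flagged field is one you should confirm against the actual publication before submission.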
Reviewers must not upload submissions, or any parts of them, to any third-party service, whether AI-related or not. The substance of reviews must originate with the reviewers themselves, not with (local) AI tools or sub-reviewers.
Take great care not to disclose any part of a submission when, for example, reviews are spell-checked by third-party services such as Grammarly or AI integrations like Microsoft 365 Copilot. Disclosing any information about a submission to any third party, such as ChatGPT, violates the submission's confidentiality and will be treated as research misconduct.
Regarding artifacts: