
@pylls
Created February 18, 2026 09:10
PoPETs 2027 AI Policy (Proposal)

Heavily inspired by the ACM Policy on Authorship, as used by CCS 2026, and the USENIX Security '26 Between-Cycle Transparency Report.

Violations of this AI policy may lead to desk rejection, removal from the PC, or, in serious cases, the program chairs reporting offending authors or reviewers to their institutions for further investigation.

Authors

Three guiding principles:

  • Authors retain full responsibility for the accuracy, originality, and integrity of the submitted paper and work described therein.
  • Mandatory disclosure of AI use; full transparency toward reviewers upon submission (see the HotCRP submission form) and toward the broader community once published (see the PoPETs 2027 LaTeX template; the disclosure does not count toward the page limit).
  • Hallucinations, fabrication, omissions, and falsification are treated as research misconduct.

The use of generative AI is permitted, provided it fully complies with the three principles above. Generative AI tools cannot be listed as authors of papers submitted to PoPETs.

Be extremely careful when using AI for references and practical bibliography management (e.g., creating BibTeX entries): AI tools are known to fabricate plausible-looking citations. As the community gets to grips with these technologies, provide sufficient detail in your disclosure of AI use; better safe than sorry.
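As a purely illustrative precaution (not part of the policy), authors could run a quick sanity pass over their bibliography and flag entries that lack a resolvable identifier such as a DOI or URL, so those entries get checked by hand. The sketch below assumes a simple `.bib` layout with one closing brace per entry on its own line; the helper name and the sample entries are hypothetical.

```python
import re

def flag_unverifiable_entries(bibtex: str) -> list[str]:
    """Return citation keys of BibTeX entries with no doi or url field.

    A crude, regex-based sketch: entries without a resolvable identifier
    deserve a manual check, since AI tools can fabricate plausible
    citations. Assumes each entry ends with '}' on its own line.
    """
    flagged = []
    # Match entries of the form: @type{key, <fields> \n}
    for match in re.finditer(r"@\w+\{([^,]+),(.*?)\n\}", bibtex, re.DOTALL):
        key, body = match.group(1).strip(), match.group(2)
        if not re.search(r"\b(doi|url)\s*=", body, re.IGNORECASE):
            flagged.append(key)
    return flagged

# Hypothetical sample: one entry with a DOI, one without any identifier.
sample = """@article{real2026,
  title = {A Real Paper},
  doi = {10.1000/example},
}
@article{suspicious2026,
  title = {Possibly Hallucinated},
  author = {Nobody},
}
"""
print(flag_unverifiable_entries(sample))  # → ['suspicious2026']
```

A real workflow would go further, e.g., resolving each DOI against Crossref and comparing the returned title and authors to the entry; the point here is only that fabricated references are cheap to screen for and expensive to let through.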

Reviewers/Editors

Reviewers must not upload submissions, or any part of them, to any third-party service, AI-related or not. The substance of reviews must originate with the reviewers themselves, not with (local) AI tools or sub-reviewers.

Take great care not to expose any part of a submission when, for example, reviews are spell-checked or language-polished by third-party services such as Grammarly or AI integrations like Microsoft 365 Copilot. Disclosing any information about a submission to any third party, such as ChatGPT, violates the submission's confidentiality and will be treated as research misconduct.

@robgjansen

I was expecting to see some explicit examples, e.g., of things that will cause your paper to be desk rejected. It states that hallucinations will be considered research misconduct, but it does not state how hallucinations will be determined, what happens when they are found, etc. Is that intended, to allow you flexibility in how you decide to implement the principles?

If you don't want to prescribe specific enforcement actions because you fear it will tie you to a certain process and will not allow you to adapt over time, then you could at least give examples of the types of enforcement actions that might be considered. For example: if hallucinations are found in the references, we may take an appropriate enforcement action, which might include immediate desk rejection, notification of the department chair, a lifetime ban on submitting to PoPETs, ... something like that?

@robgjansen

Oh also, is there a way for authors to fight a claim? What if I don't use AI at all but somehow get tagged as a hallucinator? Or what if it is a hallucination but I claim it is just a student error?
