@lreading
Created March 8, 2026 18:53
AI Transparency and Traceability in Open Source: OpenAI Codex

Generative AI in Open Source

Many open source projects are suffering from an influx of AI-generated content that places an undue burden on project maintainers. It takes many forms, but a common one is pull requests that ignore a project's existing standards and patterns, and are often incomplete, broken, buggy, or fail to address the underlying bug or feature request. As a result, many open source projects are adopting AI policies, whether in their contributing guidelines or elsewhere.

AI Contribution Policies

An emerging pattern is using a PR template to ask authors to disclose their AI usage, including the tools, providers, models, and prompts that were used to develop the code. The prompt requirement is what interested me: the ask usually does not require a complete log of prompts, but instead focuses on the types of things you asked the AI to do. This level of transparency is helpful not only for reviewers, but also for other developers who are leveraging AI tooling. Transparency around prompts can help others improve their AI-assisted contributions while staying in compliance with a repository's AI policies.

OpenAI Codex

I've been using OpenAI Codex a bit for more tedious work, or work that requires deeper investigation. While I remain fully accountable for all changes and never blindly trust AI generated code, I thought it would be interesting to create a workflow where I can not only disclose the types of prompts used, but an actual audit log.

This is not a native feature of OpenAI's Codex; however, we can derive a full log of all user messages from Codex's logs. Moving forward, I intend to leverage this pattern to provide extreme transparency around my use of AI. I hope that other contributors adopt similar patterns when contributing to repositories with similar AI policies. If done well, we can reduce the burden on open source maintainers, maintain transparency, and increase the safe and responsible adoption of generative AI in open source.

Codex User Prompt Log

On Unix-based systems, OpenAI Codex writes session logs to ~/.codex/sessions/<year>/<month>/<day>/rollout-<date>-<guid>.jsonl. It follows a similar pattern on Windows.
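If you are not sure which file belongs to your most recent session, you can sort the rollout files by modification time. This is a minimal sketch, assuming the Unix directory layout described above; `CODEX_SESSIONS_DIR` is a hypothetical override (not a real Codex setting) used here so the root is easy to change:

```shell
# Sketch: locate the most recent Codex session log on a Unix system.
# CODEX_SESSIONS_DIR is a hypothetical override, not a Codex feature;
# the default path is ~/.codex/sessions as described above.
latest_codex_log() {
  local root="${CODEX_SESSIONS_DIR:-$HOME/.codex/sessions}"
  # List every rollout file, newest first, and keep the first one.
  find "$root" -name 'rollout-*.jsonl' 2>/dev/null \
    | xargs -r ls -t 2>/dev/null | head -n 1
}

echo "Latest session log: $(latest_codex_log)"
```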

These logs include lines for things like agent responses, thinking, tasks, tool executions, and user messages. We can leverage them to generate a complete log of user prompts. This Gist uses the jq utility, but a similar approach can be used on other operating systems.
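To see how the filtering works, here is a sketch run against a fabricated sample log. The lines below only approximate the real rollout schema (an `event_msg` wrapper with a typed payload); real logs contain many more fields and entry types:

```shell
# Fabricated sample lines approximating the rollout schema:
# event_msg entries wrapping a typed payload. Real logs are richer.
cat > /tmp/rollout-sample.jsonl <<'EOF'
{"type":"event_msg","payload":{"type":"agent_message","message":"Sure, starting now."}}
{"type":"event_msg","payload":{"type":"user_message","message":"Add a dark mode toggle to the navbar."}}
{"type":"response_item","payload":{"type":"reasoning"}}
{"type":"event_msg","payload":{"type":"user_message","message":"Fix the toggle button padding."}}
EOF

# Keep only the user's prompts, in order.
jq -r 'select(.type == "event_msg" and .payload.type == "user_message") .payload.message' \
  /tmp/rollout-sample.jsonl
# Prints:
#   Add a dark mode toggle to the navbar.
#   Fix the toggle button padding.
```

Everything else in the log (agent replies, reasoning, tool activity) is filtered out, leaving exactly what you typed.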

Guidelines

  • Keep sessions scoped to a single feature/bug fix.
  • If you use multiple sessions, make note of it ahead of time.
  • If you leave a session and restart it later or on a different day, take note of the session ID to help find the log.
  • Remember that you are intentionally sharing your prompts. Never disclose sensitive data.
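A quick way to take stock of the sessions for a given day is to list that day's rollout files; going by the path pattern above, the GUID in each filename appears to identify the session. This is a sketch under that assumption, with the day directory passed in as a parameter:

```shell
# Sketch: list session log files for one day so you can note which
# sessions belong to a contribution. Assumes the directory layout
# described above; the GUID in each filename identifies the session.
list_sessions_for() {
  # $1: a day directory, e.g. ~/.codex/sessions/<year>/<month>/<day>
  ls "$1"/rollout-*.jsonl 2>/dev/null || echo "No sessions found."
}

# Usage: today's sessions.
list_sessions_for "$HOME/.codex/sessions/$(date +%Y/%m/%d)"
```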

Generating the Log

  1. Find the session log file: ~/.codex/sessions/<year>/<month>/<day>/rollout-<date>-<guid>.jsonl
  2. cd into the directory
  3. Run the following jq command, updating the file name to match your session file:
jq -r 'select(.type == "event_msg" and .payload.type == "user_message") .payload.message' rollout-<date>-<guid>.jsonl

This will return a list of all of your user prompts. This is the valid JSONL schema as of Codex v0.111.0; I cannot make guarantees about future versions of Codex's log schema.

Disclosing the Use of Generative AI

If asked to provide a description or sample of the prompts being used, create the high-level summary and provide that as asked. Maintainers may not want to review all prompts; they may just want to ensure that the prompts align with the work that was intended to be done. To provide additional, optional transparency, we can add a collapsed section. This keeps the PR description from becoming too large, but still gives a maintainer the option to review your work.
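Generating that collapsed section can be scripted. Below is a sketch that wraps the output of the jq command above in a markdown `<details>` block; the function name and the idea of appending to a PR description file are my own additions, not part of any Codex tooling:

```shell
# Sketch: turn a session log into a collapsed markdown section
# suitable for pasting into a PR description. Uses the same jq
# filter and schema assumptions as the command above.
prompt_log_section() {
  # $1: path to a rollout-*.jsonl session log
  printf '<details>\n<summary>User Prompt Log</summary>\n\n'
  jq -r 'select(.type == "event_msg" and .payload.type == "user_message") .payload.message' "$1"
  printf '\n</details>\n'
}

# Usage (path is illustrative):
# prompt_log_section rollout-<date>-<guid>.jsonl >> pr-description.md
```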

Example:

User Prompt Log
I want to add a dark mode to this website.  Create a toggle button in the navbar, and generate color schemes in both light and dark modes.  Default to the last saved preference, and fall-back to the user's operating system theme. Persist the preference in localStorage.

Looking good! The padding or margins are off on the toggle button. Also, the SVG in the hero needs its stroke fill updated in dark mode; it's hard to see right now.
  
Awesome! Can you please make the text color in dark mode just slightly lighter? I'm wanting a bit more contrast.

This is not an RFC or a new standards proposal; this is an experiment I'm conducting for my own contributions. You are welcome to join the experiment! If you do, please let me know how it went for you, and whether this pattern (or a similar one) actually added value, or was more overhead than it was worth.
