This project is called foobar. Its goal is to provide ...
This file contains additional guidance for AI agents and other AI editors.
These principles reduce common LLM coding mistakes. Apply them to every task.
Don't assume. Don't hide confusion. Surface tradeoffs.
- State assumptions explicitly. If uncertain, ask.
- If multiple interpretations exist, present them — don't pick silently.
- If a simpler approach exists, say so. Push back when warranted.
- If something is unclear, stop. Name what's confusing. Ask.
Minimum code that solves the problem. Nothing speculative.
- No features beyond what was asked.
- No abstractions for single-use code.
- No "flexibility" or "configurability" that wasn't requested.
- No error handling for impossible scenarios.
- If you write 200 lines and it could be 50, rewrite it.
The test: Would a senior engineer say this is overcomplicated? If yes, simplify.
Touch only what you must. Clean up only your own mess.
When editing existing code:
- Don't "improve" adjacent code, comments, or formatting.
- Don't refactor things that aren't broken.
- Match existing style, even if you'd do it differently.
- If you notice unrelated dead code, mention it — don't delete it.
When your changes create orphans:
- Remove imports/variables/functions that YOUR changes made unused.
- Don't remove pre-existing dead code unless asked.
The test: Every changed line should trace directly to the user's request.
Define success criteria. Loop until verified.
Transform tasks into verifiable goals:
| Instead of... | Transform to... |
|---|---|
| "Add validation" | "Write tests for invalid inputs, then make them pass" |
| "Fix the bug" | "Write a test that reproduces it, then make it pass" |
| "Refactor X" | "Ensure tests pass before and after" |
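As a hypothetical sketch of the "fix the bug" row (`parse_port` and its test are invented names, not part of this project): the bug is first pinned down as a failing test, then the fix makes it pass.

```python
# Hypothetical example of turning "fix the bug" into a verifiable goal.
# parse_port stands in for a buggy function; the original version crashed
# on URLs with no port (int("") raised ValueError).
def parse_port(url):
    host, _, port = url.partition(":")
    # The fix under test: default to 80 when no port is given.
    return int(port) if port else 80

def test_missing_port_defaults_to_80():
    # This test reproduced the original crash; it now pins the fixed behavior.
    assert parse_port("example.com") == 80

test_missing_port_defaults_to_80()
```

The test doubles as the success criterion: it fails before the fix, passes after, and guards against regressions.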
For multi-step tasks, state a brief plan:
1. [Step] → verify: [check]
2. [Step] → verify: [check]
3. [Step] → verify: [check]
Strong success criteria let you loop independently. Weak criteria ("make it work") require constant clarification.
When generating a summary of your work, consider these points:
- Describe the "why" of the changes: why the proposed solution is the right one.
- Highlight areas of the proposed changes that require careful review.
- Reduce the verbosity of your comments; more text and detail is not always better. Avoid flattery, avoid stating the obvious, avoid filler phrases, and prefer technical clarity over marketing tone.
Run all commands in the foobar conda environment — really all commands, not just those related to Python. For example, the gh CLI tool is not installed system-wide, but it is available in the conda environment.
- Do not use type annotations.
- Put import statements at the top of the file; inline imports are very rarely needed.
- Comments explain why, not what the code does.
- Do not add single-line comments that state what the next line of code does.
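A minimal sketch of these conventions (`load_entries` is a hypothetical function, not project code): imports at the top, no type annotations, and the only comment explains a decision rather than narrating the code.

```python
import json  # imports belong at the top of the file, not inline

def load_entries(text):
    data = json.loads(text)
    # Key by name: downstream lookups are by name, so building the
    # mapping once here avoids repeated linear scans later.
    return {entry["name"]: entry for entry in data["entries"]}
```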
This project uses ruff for formatting and linting. Configuration is in pyproject.toml.
- Format: `ruff format .`
- Lint: `ruff check .`
- Lint and auto-fix: `ruff check --fix .`
Run these from the project root before committing.
- Read `agents/plans/current.md` for current status (what's done, what's next).
- Read relevant `agents/designs/*.md` for architecture context (why decisions were made).
When asked to make a plan or perform research, always store the resulting plan and design documents as a markdown file in agents/plans/.
At the top of the file, include the date the plan was first created and the date it was last edited.
Use the creation date, in yyyy-mm-dd format, in the filename. Keep agents/plans/current.md
up to date with the current state of work; it should refer to the plan currently being worked on.
Design and architecture decisions live in agents/designs/. Record learnings and the reasoning behind decisions there, and refer to it when planning and implementing work.