A practical reference for writing effective prompts when working with AI coding agents like Claude Code, Cursor, and GitHub Copilot.
Context -- Role -- Action -- Format -- Tone
Context: "In our Next.js 14 app using App Router and TypeScript..."
Role: "Act as a senior backend engineer..."
Action: "Refactor the authentication middleware to..."
Format: "Return the code with inline comments explaining changes..."
Tone: "Be concise, skip obvious explanations..."
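Assembled into a single prompt, the five parts above might read as follows (the specific middleware task is a hypothetical completion of the ellipses in the examples):

```
In our Next.js 14 app using App Router and TypeScript, act as a senior
backend engineer. Refactor the authentication middleware to validate
sessions server-side. Return the code with inline comments explaining
changes. Be concise, skip obvious explanations.
```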
Put constraints before the task. Agents tend to weight earlier instructions more heavily, so constraints stated up front are less likely to be dropped.
# Good
"Use only standard library. No external dependencies.
Write a CSV parser that handles quoted fields."
# Weak
"Write a CSV parser. Oh, and don't use any external deps."
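To make the contrast concrete, here is a minimal sketch of what the "good" prompt might produce: a single-line, stdlib-only parser that handles quoted fields and escaped quotes. A real CSV parser (per RFC 4180) would also need to handle embedded newlines and multi-line records.

```typescript
// Minimal single-line CSV parser handling quoted fields and "" escapes.
// A sketch of what the constrained prompt above might yield; not a full
// RFC 4180 implementation (no embedded-newline support).
function parseCsvLine(line: string): string[] {
  const fields: string[] = [];
  let current = "";
  let inQuotes = false;
  for (let i = 0; i < line.length; i++) {
    const ch = line[i];
    if (inQuotes) {
      if (ch === '"') {
        if (line[i + 1] === '"') {
          current += '"'; // escaped quote inside a quoted field
          i++;
        } else {
          inQuotes = false; // closing quote
        }
      } else {
        current += ch;
      }
    } else if (ch === '"') {
      inQuotes = true;
    } else if (ch === ",") {
      fields.push(current);
      current = "";
    } else {
      current += ch;
    }
  }
  fields.push(current);
  return fields;
}
```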
Show the agent what you want with input/output pairs.
"Convert these API responses to TypeScript interfaces.
Example input: { "user": { "name": "string", "age": 0, "tags": ["string"] } }
Example output: interface User { name: string; age: number; tags: string[]; }
Now convert this: { "order": { "id": "string", "items": [...] } }"
Break complex tasks into numbered steps.
"1. Read the current database schema from prisma/schema.prisma
2. Identify all models that reference the User model
3. Add a 'deletedAt' soft-delete field to each
4. Generate the migration
5. Update the repository layer to filter soft-deleted records"
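Step 5 above, filtering soft-deleted records in the repository layer, could look like this sketch. It uses a plain in-memory store rather than Prisma, and the `Post` model is an illustrative assumption:

```typescript
// Illustrative in-memory repository showing the soft-delete filter from
// step 5. Post is a hypothetical model; a real app would query Prisma.
interface Post {
  id: number;
  title: string;
  deletedAt: Date | null; // null = active, non-null = soft-deleted
}

class PostRepository {
  constructor(private rows: Post[] = []) {}

  // Soft delete: stamp deletedAt instead of removing the row.
  softDelete(id: number): void {
    const row = this.rows.find((p) => p.id === id);
    if (row) row.deletedAt = new Date();
  }

  // Every read path filters out soft-deleted rows.
  findAll(): Post[] {
    return this.rows.filter((p) => p.deletedAt === null);
  }
}
```

With Prisma, the equivalent filter is a `where: { deletedAt: null }` clause on each query, or a client extension that injects it globally.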
Tell the agent what NOT to do.
"Optimize this SQL query.
- Do NOT change the table structure
- Do NOT use database-specific syntax (keep it portable)
- Do NOT add new indexes -- suggest them separately"
"You are a performance engineer who specializes in React rendering optimization.
Review this component tree and identify unnecessary re-renders."
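A common class of finding for this kind of review: a memoized React component skips re-rendering only when its props compare shallowly equal, and an inline object or array literal is a new reference on every render, so it always fails that check. A rough sketch of the comparison (not React's actual source):

```typescript
// Sketch of the default shallow props comparison React.memo performs.
// A memoized component re-renders whenever this returns false.
function shallowEqual(
  a: Record<string, unknown>,
  b: Record<string, unknown>
): boolean {
  const keysA = Object.keys(a);
  if (keysA.length !== Object.keys(b).length) return false;
  return keysA.every((k) => Object.is(a[k], b[k]));
}

// Same primitive props: shallow-equal, no re-render needed.
shallowEqual({ count: 1 }, { count: 1 }); // true

// Inline object literals are new references each render: always re-renders.
shallowEqual({ style: { color: "red" } }, { style: { color: "red" } }); // false
```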
| Goal | Prompt Prefix |
|---|---|
| Debug | "Find the bug in this code. Think step by step about the data flow..." |
| Refactor | "Refactor for readability. Preserve all existing behavior..." |
| Explain | "Explain this code to a mid-level developer. Focus on the 'why'..." |
| Test | "Write tests covering edge cases. Include: empty input, null, boundary values..." |
| Optimize | "Profile this function mentally. Where are the bottlenecks?..." |
| Security | "Audit this endpoint for OWASP Top 10 vulnerabilities..." |
- Too vague: "Make this better" -- better HOW? Faster? More readable? More secure?
- Too long: Walls of text get lost. Keep prompts under 200 words when possible.
- No context: The agent doesn't know your codebase conventions unless you tell it.
- Forgetting format: If you want a specific output shape, specify it explicitly.
Instead of retyping the same prompts, save them as skill files (.md files in your project's .claude/ or .cursor/ directory). This way the agent loads your conventions automatically.
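For instance, a hypothetical `.claude/skills/code-review.md` might look like this (the file name, frontmatter fields, and rules are all illustrative; check your agent's documentation for its exact skill-file format):

```markdown
---
name: code-review
description: Review a pull request against our team conventions
---

When reviewing code in this repo:
1. Flag any use of `any` in TypeScript.
2. Require tests for new exported functions.
3. Keep feedback under 10 bullet points.
```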
For a library of ready-made prompts, skills, and workflow templates, check out TokRepo -- an open registry with 200+ curated AI assets that you can browse or search directly from your agent:
npx tokrepo search "code review prompt"
npx tokrepo search "react optimization skill"

William Wang -- Building TokRepo, an open registry for AI assets.