AI Prompt Engineering Cheat Sheet for Coding Agents

A practical reference for writing effective prompts when working with AI coding agents like Claude Code, Cursor, and GitHub Copilot.

The CRAFT Framework

Context -- Role -- Action -- Format -- Tone

Context: "In our Next.js 14 app using App Router and TypeScript..."
Role:    "Act as a senior backend engineer..."
Action:  "Refactor the authentication middleware to..."
Format:  "Return the code with inline comments explaining changes..."
Tone:    "Be concise, skip obvious explanations..."
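If you generate prompts from templates (for scripts or skill files), the five CRAFT parts can be assembled mechanically. A minimal Python sketch; the function name and field values are illustrative, not any tool's API:

```python
# Sketch: join the five CRAFT parts into one prompt string.
# Role leads, so the agent adopts it before reading the task.

def craft_prompt(context: str, role: str, action: str,
                 fmt: str, tone: str) -> str:
    """Assemble a CRAFT-style prompt, one part per line."""
    return "\n".join([role, context, action, fmt, tone])

prompt = craft_prompt(
    context="In our Next.js 14 app using App Router and TypeScript...",
    role="Act as a senior backend engineer.",
    action="Refactor the authentication middleware to validate tokens.",
    fmt="Return the code with inline comments explaining changes.",
    tone="Be concise; skip obvious explanations.",
)
```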

Prompt Patterns That Work

1. Constraint-First Pattern

Put constraints before the task. Constraints stated up front are less likely to be dropped once the agent starts generating; one tacked onto the end reads like an afterthought.

# Good
"Use only standard library. No external dependencies. 
Write a CSV parser that handles quoted fields."

# Weak
"Write a CSV parser. Oh, and don't use any external deps."

2. Example-Driven Pattern

Show the agent what you want with input/output pairs.

"Convert these API responses to TypeScript interfaces.

Example input:  { "user": { "name": "string", "age": 0, "tags": ["string"] } }
Example output: interface User { name: string; age: number; tags: string[]; }

Now convert this: { "order": { "id": "string", "items": [...] } }"
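If you reuse this pattern often, the input/output pairs can live as data and be formatted into the prompt. A hedged Python sketch; the helper name is made up, and the pair shown is the one from the text above:

```python
# Sketch: build a few-shot prompt from (input, output) example pairs.

def few_shot_prompt(task: str, examples: list[tuple[str, str]],
                    query: str) -> str:
    """Render the task, the example pairs, then the new query."""
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Example input:  {inp}")
        lines.append(f"Example output: {out}")
    lines += ["", f"Now convert this: {query}"]
    return "\n".join(lines)

prompt = few_shot_prompt(
    task="Convert these API responses to TypeScript interfaces.",
    examples=[(
        '{ "user": { "name": "string", "age": 0, "tags": ["string"] } }',
        "interface User { name: string; age: number; tags: string[]; }",
    )],
    query='{ "order": { "id": "string", "items": [...] } }',
)
```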

3. Step-by-Step Decomposition

Break complex tasks into numbered steps.

"1. Read the current database schema from prisma/schema.prisma
 2. Identify all models that reference the User model
 3. Add a 'deletedAt' soft-delete field to each
 4. Generate the migration
 5. Update the repository layer to filter soft-deleted records"

4. Negative Specification

Tell the agent what NOT to do.

"Optimize this SQL query.
- Do NOT change the table structure
- Do NOT use database-specific syntax (keep it portable)
- Do NOT add new indexes -- suggest them separately"

5. Role + Expertise Stacking

"You are a performance engineer who specializes in React rendering optimization.
Review this component tree and identify unnecessary re-renders."

Quick Reference Table

Goal       Prompt Prefix
--------   -------------
Debug      "Find the bug in this code. Think step by step about the data flow..."
Refactor   "Refactor for readability. Preserve all existing behavior..."
Explain    "Explain this code to a mid-level developer. Focus on the 'why'..."
Test       "Write tests covering edge cases. Include: empty input, null, boundary values..."
Optimize   "Profile this function mentally. Where are the bottlenecks?..."
Security   "Audit this endpoint for OWASP Top 10 vulnerabilities..."
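If you keep these prefixes in tooling rather than retyping them, a plain lookup table is enough. An illustrative Python sketch (the dict and helper names are made up):

```python
# Illustrative: the quick-reference prefixes as a reusable lookup table.
PROMPT_PREFIXES = {
    "debug": "Find the bug in this code. Think step by step about the data flow...",
    "refactor": "Refactor for readability. Preserve all existing behavior...",
    "explain": "Explain this code to a mid-level developer. Focus on the 'why'...",
    "test": "Write tests covering edge cases. Include: empty input, null, boundary values...",
    "optimize": "Profile this function mentally. Where are the bottlenecks?...",
    "security": "Audit this endpoint for OWASP Top 10 vulnerabilities...",
}

def prefixed(goal: str, detail: str) -> str:
    """Prepend the goal's standard prefix to a task-specific detail line."""
    return f"{PROMPT_PREFIXES[goal]}\n{detail}"
```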

Common Mistakes

  1. Too vague: "Make this better" -- better HOW? Faster? More readable? More secure?
  2. Too long: Walls of text get lost. Keep prompts under 200 words when possible.
  3. No context: The agent doesn't know your codebase conventions unless you tell it.
  4. Forgetting format: If you want a specific output shape, specify it explicitly.

Reusable Skill Files

Instead of retyping the same prompts, save them as skill files (.md files in your project's .claude/ or .cursor/ directory). This way the agent loads your conventions automatically.
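A skill file is just a markdown document holding the conventions you would otherwise retype. A hypothetical example; the filename and rules below are illustrative, not a required format:

```markdown
<!-- .claude/code-review.md (hypothetical skill file) -->
# Code Review Skill

When reviewing code in this repo:
- Preserve all existing behavior; flag risky changes, don't rewrite them.
- Prefer the standard library over new dependencies.
- Return findings as a numbered list, most severe first.
```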

For a library of ready-made prompts, skills, and workflow templates, check out TokRepo -- an open registry with 200+ curated AI assets that you can browse or search directly from your agent:

npx tokrepo search "code review prompt"
npx tokrepo search "react optimization skill"

William Wang -- Building TokRepo, an open registry for AI assets.
