@metaldrummer610
Created March 20, 2026 16:14
Linear Story Quality - Claude Code Skill for validating Linear issues have sufficient detail for AI-autonomous execution
---
name: linear-story-quality
description: Validate Linear issue quality for AI-autonomous execution. Use when assessing story readiness, checking issue quality, validating before execution, or when about to start work on a Linear issue. Also use after drafting a new Linear issue to self-check before saving. Enforces hard gates (objective, acceptance criteria, affected areas) and scores soft criteria (edge cases, scope, dependencies, approach).
allowed-tools: Read, Glob, Grep, mcp__plugin_linear_linear__get_issue, mcp__plugin_linear_linear__save_issue, mcp__plugin_linear_linear__list_issues
---

Linear Story Quality

Validate that Linear issues have sufficient detail for confident AI-autonomous execution.

When to Use

  1. After drafting a new story — self-check before calling save_issue
  2. Before starting execution — validate an existing issue fetched via get_issue
  3. On request — when the user asks to check story quality or readiness

Lane separation with user-story-writer: That skill handles writing stories (format, templates, Gherkin). This skill handles validating quality. If both apply, run this skill after the story is written.

Instructions

Step 1: Get the Story Content

For new stories (Flow 1 — Creation):

  • You have just drafted the story content. Proceed to Step 2.

For existing stories (Flow 2 — Pre-Execution):

  • Fetch the issue using get_issue with the issue ID.
  • If the issue has a parentId, also fetch the parent issue for context (sub-tasks get relaxed H3 evaluation).
  • If the issue has no description (title-only), skip to Step 3 and report all hard gates as FAIL.

Step 2: Read Project Config

Check the current project's CLAUDE.md for Linear-related configuration:

  • Team name (e.g., "Hivemine")
  • Issue prefix (e.g., "APX")
  • Assignee convention (e.g., "me")

This context helps populate the assessment output. If absent, proceed without it — the quality rubric works universally.
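For illustration, such a CLAUDE.md section might look like the following. The heading and key names are hypothetical, not a fixed schema — read whatever Linear-related conventions the project actually records:

```markdown
## Linear

- Team: Hivemine
- Issue prefix: APX
- Default assignee: me
```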

Step 3: Evaluate Hard Gates

Evaluate each hard gate as PASS or FAIL. All three must pass or the story is BLOCKED.

H1: Clear Objective

Check: Does the story state a concrete outcome — what changes and why?

  • PASS: The story describes a specific, observable change and the motivation behind it. Examples: "Detect quota exhaustion errors and trip a circuit breaker" or "Replace build-time OIDC config with runtime config from /config endpoint."
  • FAIL: The objective is vague, missing, or just restates the title. Examples: "Update the enrichment stuff," "Fix the bug," "Improve performance."

H2: Acceptance Criteria

Check: Does the story have at least 2 specific, testable conditions for "done"?

  • PASS: Two or more concrete, verifiable conditions exist. They can be in a dedicated "Acceptance Criteria" section, or embedded as clearly testable statements in the description. Examples: "Circuit breaker trips after first quota error," "Blocked calls return immediately without hitting the API."
  • FAIL: No acceptance criteria, fewer than 2, or criteria are vague/untestable. Examples: "It works," "Performance is better," a single criterion like "The feature is complete."

H3: Affected Areas

Check: Does the story name at least one specific package, module, or component? (Not just "backend" or "frontend.")

  • PASS: References specific code locations. Examples: "internal/safeguards/", "Angular leads feature store," "enrichment Service waterfall loop."
  • FAIL: No mention of where code changes happen, or only says "backend" or "the API."
  • Sub-task exception: If the issue has a parentId, H3 passes if the parent issue specifies affected areas, even if the sub-task itself doesn't.

Step 4: Score Soft Criteria

Score each criterion 0, 1, or 2, then sum the four scores for a total out of 8.

S1: Edge Cases

| Score | Definition |
| --- | --- |
| 0 | No edge cases mentioned |
| 1 | 1 edge case identified |
| 2 | 2+ edge cases with handling approach described |

S2: Scope Boundaries

| Score | Definition |
| --- | --- |
| 0 | No mention of what's out of scope |
| 1 | Scope is implicit from context |
| 2 | Explicit "Out of Scope" section or clear boundary statements |

S3: Dependencies

| Score | Definition |
| --- | --- |
| 0 | No mention of related issues |
| 1 | References related issues loosely (e.g., "similar to the work in APX-75") |
| 2 | Links specific issue IDs with blocking/blocked-by relationships |

S4: Technical Approach

| Score | Definition |
| --- | --- |
| 0 | No implementation direction provided |
| 1 | General direction stated (e.g., "use Redis caching") |
| 2 | Specific approach with key design decisions (e.g., "CachedClient wrapping Searcher interface, SHA-256 keys, 24h TTL") |
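The tally is mechanical once each criterion is scored. A minimal sketch (the criterion names here are illustrative, not part of the skill):

```python
# Hypothetical helper: sum the four soft-criteria scores (0-2 each) into a 0-8 total.
VALID_SCORES = {0, 1, 2}
CRITERIA = ("edge_cases", "scope_boundaries", "dependencies", "technical_approach")

def soft_total(scores: dict) -> int:
    """Validate each score is 0, 1, or 2, then return the 0-8 sum."""
    for name in CRITERIA:
        if scores.get(name) not in VALID_SCORES:
            raise ValueError(f"{name} must be scored 0, 1, or 2")
    return sum(scores[name] for name in CRITERIA)

print(soft_total({"edge_cases": 2, "scope_boundaries": 1,
                  "dependencies": 0, "technical_approach": 2}))  # → 5
```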

Step 5: Determine Verdict

  1. If any hard gate is FAIL → verdict is BLOCKED
  2. Else if soft score is 0-3 → verdict is NEEDS WORK
  3. Else if soft score is 4-5 → verdict is ACCEPTABLE
  4. Else if soft score is 6-8 → verdict is STRONG
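The decision table above reduces to a short function. A sketch, assuming the hard gates are represented as booleans (gate names are illustrative):

```python
def verdict(hard_gates: dict, soft_score: int) -> str:
    """Map hard-gate results and the 0-8 soft score to a verdict."""
    if not all(hard_gates.values()):  # any FAIL blocks the story outright
        return "BLOCKED"
    if soft_score <= 3:
        return "NEEDS WORK"
    if soft_score <= 5:
        return "ACCEPTABLE"
    return "STRONG"                   # 6-8

gates = {"H1_objective": True, "H2_acceptance": True, "H3_areas": True}
print(verdict(gates, 6))                          # → STRONG
print(verdict({**gates, "H3_areas": False}, 8))   # → BLOCKED
```

Note the hard gates dominate: a perfect soft score cannot rescue a story that fails any gate.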

Step 6: Output Assessment

Output the assessment in this exact format:

## Story Quality Assessment: <ISSUE-ID>

### Hard Gates
- [PASS/FAIL] Clear Objective: <1-line summary>
- [PASS/FAIL] Acceptance Criteria: <count found, or what's missing>
- [PASS/FAIL] Affected Areas: <what was found, or what's missing>

### Soft Criteria (<total>/8)
- Edge Cases: <score>/2 — <brief justification>
- Scope Boundaries: <score>/2 — <brief justification>
- Dependencies: <score>/2 — <brief justification>
- Technical Approach: <score>/2 — <brief justification>

### Verdict: <BLOCKED|NEEDS WORK|ACCEPTABLE|STRONG>
<If BLOCKED: list which hard gates failed and what's needed to fix them>
<If NEEDS WORK: list which soft criteria could be improved>
<If ACCEPTABLE/STRONG: note any optional improvements>

Step 7: Act on Verdict

Flow 1 (Story Creation):

  • BLOCKED: Fix the gaps in the draft, re-run the rubric, then show the updated assessment.
  • NEEDS WORK: Show the assessment, ask the user if they want to fill gaps or proceed.
  • ACCEPTABLE/STRONG: Show the assessment, ask for approval to save to Linear.

Flow 2 (Pre-Execution):

  • BLOCKED: Tell the user which gaps need filling. Offer to enrich the story from codebase context or ask clarifying questions. After enrichment, re-run the rubric.
  • NEEDS WORK: Show the assessment, ask if the user wants to proceed or enrich first.
  • ACCEPTABLE/STRONG: Proceed to execution.