| name | description | allowed-tools |
|---|---|---|
| linear-story-quality | Validate Linear issue quality for AI-autonomous execution. Use when assessing story readiness, checking issue quality, validating before execution, or when about to start work on a Linear issue. Also use after drafting a new Linear issue to self-check before saving. Enforces hard gates (objective, acceptance criteria, affected areas) and scores soft criteria (edge cases, scope, dependencies, approach). | Read, Glob, Grep, mcp__plugin_linear_linear__get_issue, mcp__plugin_linear_linear__save_issue, mcp__plugin_linear_linear__list_issues |
Validate that Linear issues have sufficient detail for confident AI-autonomous execution.
When to use:
- After drafting a new story — self-check before calling `save_issue`
- Before starting execution — validate an existing issue fetched via `get_issue`
- On request — when the user asks to check story quality or readiness
**Lane separation with `user-story-writer`:** That skill handles writing stories (format, templates, Gherkin). This skill handles validating quality. If both apply, run this skill after the story is written.
## Step 1: Identify the Flow

For new stories (Flow 1 — Creation):
- You have just drafted the story content. Proceed to Step 2.
For existing stories (Flow 2 — Pre-Execution):
- Fetch the issue using `get_issue` with the issue ID.
- If the issue has a `parentId`, also fetch the parent issue for context (sub-tasks get relaxed H3 evaluation).
- If the issue has no description (title-only), skip to Step 3 and report all hard gates as FAIL.
## Step 2: Project Context

Check the current project's `CLAUDE.md` for Linear-related configuration:
- Team name (e.g., "Hivemine")
- Issue prefix (e.g., "APX")
- Assignee convention (e.g., "me")
This context helps populate the assessment output. If absent, proceed without it — the quality rubric works universally.
## Step 3: Hard Gates

Evaluate each hard gate as PASS or FAIL. All three must pass or the story is BLOCKED.
### H1: Clear Objective

Check: Does the story state a concrete outcome — what changes and why?
- PASS: The story describes a specific, observable change and the motivation behind it. Examples: "Detect quota exhaustion errors and trip a circuit breaker" or "Replace build-time OIDC config with runtime config from /config endpoint."
- FAIL: The objective is vague, missing, or just restates the title. Examples: "Update the enrichment stuff," "Fix the bug," "Improve performance."
### H2: Acceptance Criteria

Check: Does the story have at least 2 specific, testable conditions for "done"?
- PASS: Two or more concrete, verifiable conditions exist. They can be in a dedicated "Acceptance Criteria" section, or embedded as clearly testable statements in the description. Examples: "Circuit breaker trips after first quota error," "Blocked calls return immediately without hitting the API."
- FAIL: No acceptance criteria, fewer than 2, or criteria are vague/untestable. Examples: "It works," "Performance is better," a single criterion like "The feature is complete."
### H3: Affected Areas

Check: Does the story name at least one specific package, module, or component? (Not just "backend" or "frontend.")
- PASS: References specific code locations. Examples: "internal/safeguards/", "Angular leads feature store," "enrichment Service waterfall loop."
- FAIL: No mention of where code changes happen, or only says "backend" or "the API."
- Sub-task exception: If the issue has a `parentId`, H3 passes if the parent issue specifies affected areas, even if the sub-task itself doesn't.
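The three gates are judgment calls over the issue text, but the pass/fail logic itself is mechanical. A minimal sketch of H3 with the sub-task exception, where the boolean inputs stand in for the judgments made while reading the issue (the function name and parameters are illustrative, not part of the skill):

```python
def h3_affected_areas(issue_names_areas: bool,
                      is_subtask: bool = False,
                      parent_names_areas: bool = False) -> bool:
    """H3 (Affected Areas): passes if the issue names a specific package,
    module, or component, or, for a sub-task, if its parent issue does."""
    return issue_names_areas or (is_subtask and parent_names_areas)
```

A title-only sub-task can therefore still pass H3 as long as its parent pins down where the code changes happen.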
## Step 4: Soft Criteria

Score each criterion 0, 1, or 2. Sum the four scores for a total (0-8).
### Edge Cases

| Score | Definition |
|---|---|
| 0 | No edge cases mentioned |
| 1 | 1 edge case identified |
| 2 | 2+ edge cases with handling approach described |
### Scope Boundaries

| Score | Definition |
|---|---|
| 0 | No mention of what's out of scope |
| 1 | Scope is implicit from context |
| 2 | Explicit "Out of Scope" section or clear boundary statements |
### Dependencies

| Score | Definition |
|---|---|
| 0 | No mention of related issues |
| 1 | References related issues loosely (e.g., "similar to the work in APX-75") |
| 2 | Links specific issue IDs with blocking/blocked-by relationships |
### Technical Approach

| Score | Definition |
|---|---|
| 0 | No implementation direction provided |
| 1 | General direction stated (e.g., "use Redis caching") |
| 2 | Specific approach with key design decisions (e.g., "CachedClient wrapping Searcher interface, SHA-256 keys, 24h TTL") |
## Step 5: Verdict

- If any hard gate is FAIL → verdict is BLOCKED
- Else if soft score is 0-3 → verdict is NEEDS WORK
- Else if soft score is 4-5 → verdict is ACCEPTABLE
- Else if soft score is 6-8 → verdict is STRONG
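The rules above reduce to a small total function: a hard-gate failure dominates, and otherwise the 0-8 soft score is banded. A sketch (names are illustrative):

```python
def verdict(all_hard_gates_pass: bool, soft_score: int) -> str:
    """Map gate results and the summed soft score (0-8) to a verdict."""
    if not all_hard_gates_pass:
        return "BLOCKED"       # any hard-gate FAIL blocks, regardless of score
    if soft_score <= 3:
        return "NEEDS WORK"
    if soft_score <= 5:
        return "ACCEPTABLE"
    return "STRONG"            # 6-8
```

Note that BLOCKED is reachable even with a perfect soft score; the gates are non-negotiable.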
## Step 6: Output

Output the assessment in this exact format:
## Story Quality Assessment: <ISSUE-ID>
### Hard Gates
- [PASS/FAIL] Clear Objective: <1-line summary>
- [PASS/FAIL] Acceptance Criteria: <count found, or what's missing>
- [PASS/FAIL] Affected Areas: <what was found, or what's missing>
### Soft Criteria (<total>/8)
- Edge Cases: <score>/2 — <brief justification>
- Scope Boundaries: <score>/2 — <brief justification>
- Dependencies: <score>/2 — <brief justification>
- Technical Approach: <score>/2 — <brief justification>
### Verdict: <BLOCKED|NEEDS WORK|ACCEPTABLE|STRONG>
<If BLOCKED: list which hard gates failed and what's needed to fix them>
<If NEEDS WORK: list which soft criteria could be improved>
<If ACCEPTABLE/STRONG: note any optional improvements>
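The template above can be assembled mechanically from the gate and criteria results. A minimal sketch, assuming hypothetical dict shapes for the inputs (the function name, signature, and dict layout are all illustrative, not part of the skill):

```python
def render_assessment(issue_id: str,
                      gates: dict[str, tuple[bool, str]],
                      soft: dict[str, tuple[int, str]]) -> str:
    """Render the assessment markdown.

    gates maps gate name -> (passed, one-line note);
    soft maps criterion name -> (score 0-2, brief justification)."""
    total = sum(score for score, _ in soft.values())
    if not all(ok for ok, _ in gates.values()):
        v = "BLOCKED"
    elif total <= 3:
        v = "NEEDS WORK"
    elif total <= 5:
        v = "ACCEPTABLE"
    else:
        v = "STRONG"
    lines = [f"## Story Quality Assessment: {issue_id}", "", "### Hard Gates"]
    for name, (ok, note) in gates.items():
        lines.append(f"- [{'PASS' if ok else 'FAIL'}] {name}: {note}")
    lines += ["", f"### Soft Criteria ({total}/8)"]
    for name, (score, why) in soft.items():
        lines.append(f"- {name}: {score}/2 — {why}")
    lines += ["", f"### Verdict: {v}"]
    return "\n".join(lines)
```

The closing recommendation lines (what to fix, what to improve) are left to the assessor, since they depend on the specific gaps found.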
## Next Steps

Flow 1 (Story Creation):
- BLOCKED: Fix the gaps in the draft, re-run the rubric, then show the updated assessment.
- NEEDS WORK: Show the assessment, ask the user if they want to fill gaps or proceed.
- ACCEPTABLE/STRONG: Show the assessment, ask for approval to save to Linear.
Flow 2 (Pre-Execution):
- BLOCKED: Tell the user which gaps need filling. Offer to enrich the story from codebase context or ask clarifying questions. After enrichment, re-run the rubric.
- NEEDS WORK: Show the assessment, ask if the user wants to proceed or enrich first.
- ACCEPTABLE/STRONG: Proceed to execution.