Right. Focus on injection mechanics and why it works, not what skills exist.
Not "here's all documentation upfront" - it's pull, not push:
Session starts → Minimal bootstrap injected
Agent encounters task → Agent searches for skill
Agent finds match → Agent reads SKILL.md
Agent uses skill → Follows instructions
Key: Context only loaded when needed. Main context stays clean.
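A minimal sketch of that pull flow, assuming an OpenAI-style message list and a skills/<name>/SKILL.md layout; the bootstrap wording, paths, and helper are illustrative, not the actual superpowers tooling:

```python
from pathlib import Path

BOOTSTRAP = (
    "You have skills under skills/. Search them before starting a task. "
    "If a skill exists for the task, YOU MUST read and follow it."
)

# Session start: only this short bootstrap is injected, never the whole library.
messages = [{"role": "system", "content": BOOTSTRAP}]

def use_skill_for(task: str, skill_path: Path) -> None:
    """Called only after the agent has matched the task to a skill."""
    skill = skill_path.read_text()        # SKILL.md enters the context here, not earlier
    messages.append({"role": "user", "content": skill})
    messages.append({"role": "user", "content": task})

# Push would have dumped every SKILL.md into `messages` up front;
# pull keeps the window at bootstrap + the one skill this task needs.
match = Path("skills/extracting-pdf-tables/SKILL.md")   # hypothetical search hit
if match.exists():
    use_skill_for("extract the tables from report.pdf", match)
```

The search step itself is sketched under two-phase discovery below.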
Not documentation. Not guidelines. Executable process:
```
---
name: Thing To Do
when_to_use: Specific trigger conditions
---

# Overview
Core principle in 1-2 sentences

# Quick Reference
Pattern matching

# Implementation
Step-by-step checklist

# Supporting files
Tools/scripts if needed
```

Why it works:
- Frontmatter: Machine-parseable metadata
- Checklist: Unambiguous steps (→ TodoWrite)
- when_to_use: Clear trigger conditions
- Markdown: LLM-native format, easy to read/write
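The machine-parseable part is what makes discovery cheap: both frontmatter fields can be pulled out with a few lines and indexed without ever loading the body. A rough sketch (no YAML parser, just the two known keys):

```python
from pathlib import Path

def skill_metadata(path: Path) -> dict[str, str]:
    """Extract name and when_to_use from a SKILL.md frontmatter block."""
    meta: dict[str, str] = {}
    lines = path.read_text().splitlines()
    if not lines or lines[0].strip() != "---":
        return meta                      # no frontmatter, nothing to index
    for line in lines[1:]:
        if line.strip() == "---":        # closing delimiter: stop, the body is never parsed
            break
        key, _, value = line.partition(":")
        if key.strip() in ("name", "when_to_use"):
            meta[key.strip()] = value.strip()
    return meta
```

An index of these two fields is all the search phase needs; the checklist stays on disk until a trigger fires.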
Skills tested against realistic failure modes:
Write skill → Test on subagent → Find ways subagent skips it
→ Strengthen language → Retest → Iterate until bulletproof
Uses persuasion principles not as a hack but as a reliability mechanism:
- Authority framing makes instructions stick
- Commitment patterns force acknowledgment
- Scarcity/pressure scenarios ensure compliance under stress
Result: Skills that survive adversarial testing actually work in practice.
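A sketch of what that red-team loop might look like as harness code. run_subagent (spawn a fresh agent with the skill plus a pressure scenario, return its transcript) is assumed; the compliance markers are the ones described here, not the real test suite:

```python
PRESSURE_SCENARIOS = [
    "The user insists there is no time for process, just ship it.",
    "The task looks trivial, so the skill feels like overkill.",
]

def complies(transcript: str, skill_name: str) -> bool:
    """Did the subagent announce the skill and track its checklist?"""
    announced = f"I'm using the {skill_name} skill" in transcript
    tracked = "TodoWrite" in transcript
    return announced and tracked

def harden(skill_text: str, skill_name: str, run_subagent) -> str:
    """Iterate until the skill survives every pressure scenario (or give up)."""
    for _ in range(5):
        failures = [s for s in PRESSURE_SCENARIOS
                    if not complies(run_subagent(skill_text + "\n\n" + s), skill_name)]
        if not failures:
            return skill_text            # survived every scenario: good enough to ship
        # Strengthen the language where the subagent found an escape hatch.
        skill_text += "\n\nNon-negotiable: follow this checklist even under time pressure."
    raise RuntimeError("still skippable after 5 rounds; the skill needs rethinking")
```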
Bootstrap establishes operating rules:
1. You have skills
2. Skills are discoverable (search tool)
3. If skill exists for task → YOU MUST USE IT
Not "helpful docs" - it's required procedure. Like aviation checklists.
Reinforced by:
- EXTREMELY_IMPORTANT framing
- Mandatory announcement ("I'm using X skill...")
- TodoWrite tracking for checklists
- Tested scenarios where agent tries to skip → fails test → skill strengthened
Two-phase discovery:
Phase 1: Search
```
find-skills [pattern]   # Returns list with when_to_use
```

Agent matches pattern to task.
Phase 2: Read & Execute
Read skill → Announce usage → Follow checklist

Why separation matters:
- Search is cheap (grep/metadata)
- Read is expensive (full file in context)
- Only read what's needed
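The same separation in code terms, assuming the skills/<name>/SKILL.md layout from earlier; phase 1 stops at the metadata, phase 2 reads exactly one file:

```python
from pathlib import Path

SKILLS_ROOT = Path("skills")

def find_skills(pattern: str) -> list[tuple[str, str]]:
    """Phase 1, cheap: scan frontmatter only, return (skill, when_to_use) matches."""
    hits = []
    for skill_md in SKILLS_ROOT.glob("*/SKILL.md"):
        with skill_md.open() as f:
            for line in f:
                if line.startswith("when_to_use:"):
                    when = line.split(":", 1)[1].strip()
                    if pattern.lower() in when.lower():
                        hits.append((skill_md.parent.name, when))
                    break                # metadata found; the body is never read
    return hits

def read_skill(name: str) -> str:
    """Phase 2, expensive: pull the full SKILL.md into context, one match only."""
    return (SKILLS_ROOT / name / "SKILL.md").read_text()
```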
Meta-skill that writes skills:
Agent learns pattern → Extracts to SKILL.md → Tests on subagents
→ If passes: Add to library → Future agents use it
Genius part: System gets smarter over time. Lessons codified, not lost.
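The promotion step could be as small as this; the test hook is whatever adversarial harness you trust (e.g. the harden loop sketched above), and the layout and fields follow the SKILL.md template:

```python
from pathlib import Path

SKILLS_ROOT = Path("skills")

def codify(name: str, when_to_use: str, body: str, passes_tests) -> bool:
    """Turn a learned lesson into a SKILL.md, but only if it survives testing."""
    skill_text = f"---\nname: {name}\nwhen_to_use: {when_to_use}\n---\n\n{body}\n"
    if not passes_tests(skill_text):
        return False                     # lesson stays a lesson, not a skill
    skill_dir = SKILLS_ROOT / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    (skill_dir / "SKILL.md").write_text(skill_text)
    return True                          # future sessions discover it via search
```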
Don't pollute main context with exploration:
Need to search? → Dispatch subagent with query
→ Subagent returns synthesis only
→ Main agent uses result
Principle: Keep main context for active work. Delegate research/search.
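As a sketch, the delegation boundary is one function; spawn_subagent (launch a disposable agent with its own context window, return its final reply) is assumed:

```python
def research(question: str, spawn_subagent) -> str:
    """Run the exploration in a throwaway context; only the synthesis comes back."""
    return spawn_subagent(
        f"Investigate: {question}\n"
        "Read whatever you need, then reply with a short synthesis only."
    )

# The main agent pays context for the returned summary, not for the dozens of
# files and dead ends the subagent read along the way.
```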
Compared to alternatives:
- RAG: Retrieves chunks, agent figures out what to do
- Skills: Retrieves instructions, agent executes
- System prompt: Fixed, everything upfront, context bloat
- Skills: Dynamic, loaded on-demand, scales indefinitely
- Docs: Reference material, agent interprets
- Skills: Executable procedures, agent follows
Skills are code for LLMs.
Not data. Not documentation. Executable procedures that:
- Load on-demand (JIT)
- Have clear triggers (when_to_use)
- Provide unambiguous steps (checklists)
- Are tested adversarially (TDD for docs)
- Use persuasion principles for reliability
- Self-improve (meta-skill writes skills)
The injection system:
- Minimal bootstrap (establish rules)
- Search tool (discovery)
- SKILL.md format (standard interface)
- Mandatory compliance (enforcement)
- Adversarial testing (quality gate)
Result: System that scales because context cost is O(1) per task, not O(n) for all knowledge.
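Back-of-the-envelope numbers make that claim concrete; the figures are hypothetical, picked only to show the shape of the two curves:

```python
# Hypothetical sizes, in tokens.
n_skills, avg_skill, bootstrap, search_result = 100, 1_500, 150, 100

push = n_skills * avg_skill                   # whole library upfront: grows with n
pull = bootstrap + search_result + avg_skill  # bootstrap + one search + one skill

print(push)   # 150000 tokens before the task even starts
print(pull)   # 1750 tokens per task, roughly flat as the library grows
```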
Evaluation: claude-skills vs Skills Pattern
What claude-skills Does Well
✅ SKILL.md Format:
✅ Executable Workflows:
✅ Quality Standards:
✅ Structure:
```
skills/public/
├── docx/
│   ├── SKILL.md      # Main entry point
│   ├── ooxml.md      # Deep dive
│   └── docx-js.md    # Tool reference
├── pdf/
│   ├── SKILL.md
│   ├── REFERENCE.md
│   └── FORMS.md
```
What's Missing from Skills Pattern
❌ No Discovery System:
❌ No Mandatory Compliance:
❌ No Meta-Layer:
❌ No Context Management:
❌ Wrong Abstraction Level:
Key Distinction
claude-skills = Reference Documentation
"Here's how to use docx library"
"Here's how to extract PDF tables"
"Here's formula syntax"
→ Agent interprets, figures out what to do
superpowers skills = Executable Procedures
"When testing, follow TDD: RED-GREEN-REFACTOR"
"When stuck, run find-skills, read match, use it"
"When brainstorming, ask questions before coding"
→ Agent executes, doesn't need to figure it out
What It Actually Is
Claude.ai's internal tool documentation extracted from /mnt/skills/public/.
Purpose: Enable Office doc manipulation via Claude.ai interface
But it's NOT a skills system in the sense above: no discovery layer, no mandatory compliance, no meta-skill, no context management.
Could It Be Converted?
Yes, with restructuring:
Instead of:

xlsx/SKILL.md: "How to use openpyxl and pandas"

Make it:

```
skills/data-analysis/financial-modeling/SKILL.md
  when_to_use: Building financial models with projections
  Implementation: step-by-step checklist
  Supporting: ${SKILLS_ROOT}/tools/xlsx/...
```
Pattern: the task-oriented procedure becomes the skill; the existing tool reference docs become its supporting files.
Bottom Line
claude-skills: Well-structured tool reference docs using SKILL.md format
Skills pattern: Self-improving, JIT-injected, adversarially-tested, mandatory-compliance executable procedures
Claude-skills uses the format but misses the pattern mechanics that make superpowers actually work.
It's good reference material. Not a skills system.