Status: Draft
Author: @bdougie
Date: 2026-02-15
Automatically generate reusable "skills" from Tapes session data—extracting patterns, successful prompts, and workflows that can be packaged as agent capabilities, prompt templates, or MCP tools.
Every coding session with an AI agent contains implicit knowledge:
- Prompts that worked: Specific phrasings that got good results
- Workflows: Multi-step patterns for common tasks (debug → fix → test)
- Domain expertise: Project-specific context that improves responses
- Tool sequences: Effective combinations of tool calls
This knowledge currently lives in session history and dies there. Teams repeat the same discovery process. Skill generation extracts this tacit knowledge and makes it reusable.
Use cases:
- "Generate a skill from my best debugging sessions"
- "What patterns do my successful refactoring sessions share?"
- "Create a project-specific prompt template from past sessions"
- "Package this workflow as an MCP tool for the team"
A skill is a packaged, reusable unit of agent capability:

```yaml
# ~/.tapes/skills/debug-react-hooks.yaml
name: debug-react-hooks
version: 1.0.0
description: Systematic approach to debugging React hook issues
generated_from:
  sessions: [abc123, def456, ghi789]
  extracted_at: 2026-02-15T18:00:00Z

# The core prompt/instruction
prompt: |
  When debugging React hook issues, follow this systematic approach:

  1. **Identify the hook**: Which hook is misbehaving? (useState, useEffect, useMemo, custom)
  2. **Check dependencies**: For useEffect/useMemo/useCallback, verify the dependency array
  3. **Trace the render cycle**: Add console.logs to track when the component renders
  4. **Look for stale closures**: Common with useEffect callbacks referencing state
  5. **Check for missing cleanup**: useEffect cleanup functions for subscriptions/timers

  Common patterns from past sessions:
  - Infinite loops often come from objects/arrays in dependency arrays
  - Stale state in event handlers → use functional updates or refs
  - Memory leaks from unsubscribed effects

# Optional: specific examples from sessions
examples:
  - input: "My useEffect runs infinitely"
    output: "Check if you're creating new object/array references in the dependency array..."
    source_session: abc123

# Optional: suggested tool sequence
workflow:
  - tool: read_file
    description: "Read the component with the problematic hook"
  - tool: grep_search
    description: "Search for other usages of the same state"
  - tool: edit_file
    description: "Apply the fix"
  - tool: bash
    description: "Run tests to verify"

# Metadata
tags: [react, hooks, debugging, frontend]
author: bdougie
team: paper-compute
stats:
  times_used: 0
  success_rate: null
```

### 1. Prompt Templates

Extracted phrasings and structures that get good results:
````yaml
type: prompt-template
name: explain-code-change
prompt: |
  Explain this code change in the style of a PR review:
  - What problem does it solve?
  - What's the approach?
  - Any potential issues?

  ```diff
  {{diff}}
  ```
variables:
  - name: diff
    type: string
    description: The git diff to explain
````
### 2. Workflows
Multi-step patterns with tool sequences:

```yaml
type: workflow
name: add-feature-with-tests
steps:
  - name: understand
    prompt: "Read the codebase structure and understand where this feature belongs"
    tools: [read_file, list_files, grep_search]
  - name: implement
    prompt: "Implement the feature following existing patterns"
    tools: [edit_file, create_file]
  - name: test
    prompt: "Write tests for the new feature"
    tools: [create_file, edit_file]
  - name: verify
    prompt: "Run tests and fix any failures"
    tools: [bash]
```

### 3. Domain Knowledge

Project-specific context that improves responses:
```yaml
type: domain-knowledge
name: contributor-info-context
project: contributor.info
context: |
  contributor.info is a Next.js app that analyzes GitHub contributors.

  Key directories:
  - /app - Next.js app router pages
  - /components - React components (use shadcn/ui)
  - /lib - Utility functions and API clients
  - /hooks - Custom React hooks

  Conventions:
  - Use TypeScript strict mode
  - Prefer server components, use 'use client' only when needed
  - API routes in /app/api use edge runtime

  Common pitfalls:
  - GitHub API rate limits: use authenticated requests
  - Large repos: paginate contributor queries
```

### 4. MCP Tools

Package a skill as an MCP tool for programmatic use:
```yaml
type: mcp-tool
name: analyze_pr
description: Analyze a pull request for potential issues
input_schema:
  type: object
  properties:
    pr_url:
      type: string
      description: GitHub PR URL
  required: [pr_url]
implementation:
  prompt: |
    Analyze this PR for:
    1. Potential bugs or edge cases
    2. Performance implications
    3. Security concerns
    4. Test coverage gaps

    PR: {{pr_url}}
```

Select sessions to learn from:
```bash
# Generate skill from specific sessions
tapes skill generate --from abc123,def456,ghi789

# Generate from search results
tapes search "debugging hooks" --status completed | tapes skill generate

# Generate from successful sessions in a project
tapes skill generate --project contributor.info --status completed --min-cost 0.10

# Interactive selection
tapes skill generate -i
```

Selection criteria:
- Completed sessions (not abandoned)
- Minimum engagement (cost > $0.10 indicates substantive work)
- Positive signals: task completed, no error loops
- Optional: user rating/feedback
Use an LLM to analyze sessions and extract patterns:

```go
type PatternExtractor struct {
	llm      LLMClient // client used to run the extraction prompt
	model    string    // claude-sonnet-4 recommended
	sessions []SessionDetail
}

type ExtractedPatterns struct {
	// Common prompt structures that worked
	EffectivePrompts []PromptPattern `json:"effective_prompts"`
	// Multi-step workflows
	Workflows []WorkflowPattern `json:"workflows"`
	// Domain knowledge mentioned
	DomainContext []string `json:"domain_context"`
	// Tool usage patterns
	ToolSequences []ToolSequence `json:"tool_sequences"`
	// Anti-patterns (what didn't work)
	Pitfalls []string `json:"pitfalls"`
}

func (e *PatternExtractor) Extract(ctx context.Context) (*ExtractedPatterns, error) {
	prompt := `Analyze these coding sessions and extract reusable patterns.

Sessions:
{{range .Sessions}}
---
Project: {{.Summary.Project}}
Model: {{.Summary.Model}}
Duration: {{.Summary.Duration}}
Status: {{.Summary.Status}}

Messages:
{{range .Messages}}
[{{.Role}}]: {{.Text}}
{{if .ToolCalls}}Tools: {{.ToolCalls}}{{end}}
{{end}}
---
{{end}}

Extract:
1. Effective prompt patterns (phrasings/structures that got good results)
2. Multi-step workflows (sequences of actions that accomplished goals)
3. Domain knowledge (project-specific context that was useful)
4. Tool sequences (effective combinations of tool calls)
5. Anti-patterns (approaches that failed or required correction)

Output as JSON matching the ExtractedPatterns schema.`

	// Call the LLM with the rendered prompt and session data
	return e.llm.Generate(ctx, prompt, e.sessions)
}
```

Convert extracted patterns into skill definitions:
```go
type SkillAssembler struct {
	patterns *ExtractedPatterns
	metadata SkillMetadata
}

func (a *SkillAssembler) Assemble() (*Skill, error) {
	skill := &Skill{
		Name:        a.metadata.Name,
		Description: a.synthesizeDescription(),
		Prompt:      a.buildPrompt(),
		Examples:    a.selectExamples(),
		Workflow:    a.buildWorkflow(),
		Tags:        a.inferTags(),
		GeneratedFrom: GenerationMetadata{
			Sessions:    a.metadata.SessionIDs,
			ExtractedAt: time.Now(),
			Patterns:    a.patterns,
		},
	}
	return skill, nil
}
```

```bash
# Preview generated skill
tapes skill generate --from abc123 --preview

# Edit before saving
tapes skill generate --from abc123 --edit

# Test skill against a new session
tapes skill test debug-react-hooks --session xyz789
```
```bash
# Generate a skill from sessions
tapes skill generate [--from <session-ids>] [--name <name>]
                     [--type prompt|workflow|domain|mcp]
                     [--project <project>] [--since <duration>]
                     [--preview] [--edit] [-o <output-file>]

# List skills
tapes skill list [--type <type>] [--tag <tag>]

# Show skill details
tapes skill show <name>

# Use a skill (inject into session context)
tapes chat --skill debug-react-hooks

# Export skill
tapes skill export <name> -o skill.yaml

# Import skill
tapes skill import skill.yaml

# Share skill to team
tapes skill share <name>

# Delete skill
tapes skill delete <name>

# Test skill against sessions
tapes skill test <name> --session <id>
```

```
┌─ Deck ──────────────────────────────────────────────────────────┐
│ [Sessions] [Shared] [Skills] [Team]                             │
├─────────────────────────────────────────────────────────────────┤
│ 🔍 Search skills...                      [+ Generate New Skill] │
├─────────────────────────────────────────────────────────────────┤
│ MY SKILLS                                                       │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ 🛠 debug-react-hooks                               workflow │ │
│ │ Systematic approach to debugging React hook issues          │ │
│ │ Generated from 3 sessions • Used 12 times • 85% success     │ │
│ │ [Use] [Edit] [Share] [Delete]                               │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ 📝 contributor-info-context                          domain │ │
│ │ Project context for contributor.info                        │ │
│ │ Auto-generated • Updated 2 days ago                         │ │
│ └─────────────────────────────────────────────────────────────┘ │
│                                                                 │
│ TEAM SKILLS                                                     │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ 🛠 api-error-handling                              workflow │ │
│ │ Shared by alice • 3 days ago                                │ │
│ └─────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
```
```
┌─ Generate Skill ────────────────────────────────────────────────┐
│                                                                 │
│ Step 1: Select Sessions                                         │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ ☑ Feb 15 - Debug useEffect infinite loop           $0.42    │ │
│ │ ☑ Feb 12 - Fix useState stale closure              $0.31    │ │
│ │ ☑ Feb 10 - React hooks performance issue           $0.28    │ │
│ │ ☐ Feb 8  - Unrelated refactoring                   $0.15    │ │
│ └─────────────────────────────────────────────────────────────┘ │
│                                                                 │
│ 3 sessions selected • Estimated patterns: 5-8                   │
│                                                                 │
│ [Cancel]                              [Next: Extract Patterns]  │
└─────────────────────────────────────────────────────────────────┘
```
```
┌─ Generate Skill ────────────────────────────────────────────────┐
│                                                                 │
│ Step 2: Review Extracted Patterns                               │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ EFFECTIVE PROMPTS                                           │ │
│ │ • "Check the dependency array for object references"        │ │
│ │ • "Add console.log before and after the hook"               │ │
│ │                                                             │ │
│ │ WORKFLOW                                                    │ │
│ │ 1. Identify the problematic hook                            │ │
│ │ 2. Check dependency arrays                                  │ │
│ │ 3. Add debugging logs                                       │ │
│ │ 4. Apply fix and verify                                     │ │
│ │                                                             │ │
│ │ PITFALLS FOUND                                              │ │
│ │ • Don't use objects directly in deps (use useMemo)          │ │
│ └─────────────────────────────────────────────────────────────┘ │
│                                                                 │
│ [Back]              [Edit Patterns]         [Next: Name & Save] │
└─────────────────────────────────────────────────────────────────┘
```
```
~/.tapes/
├── skills/
│   ├── debug-react-hooks.yaml
│   ├── contributor-info-context.yaml
│   └── api-error-handling.yaml
├── skills.db        # SQLite index for search
└── ...
```
```sql
CREATE TABLE skills (
  id TEXT PRIMARY KEY,
  name TEXT UNIQUE NOT NULL,
  type TEXT NOT NULL,          -- 'prompt', 'workflow', 'domain', 'mcp'
  description TEXT,
  prompt TEXT,
  workflow JSON,
  examples JSON,
  tags JSON,

  -- Generation metadata
  source_sessions JSON,        -- array of session IDs
  generated_at TIMESTAMP,
  generated_by TEXT,

  -- Usage tracking
  use_count INTEGER DEFAULT 0,
  success_count INTEGER DEFAULT 0,
  last_used_at TIMESTAMP,

  -- Sharing
  team_id TEXT,
  shared_at TIMESTAMP,
  shared_by TEXT,

  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_skills_type ON skills(type);
CREATE INDEX idx_skills_team ON skills(team_id);
```

Export skills directly to agent configuration files:
```bash
# Sync skill to Claude Code (CLAUDE.md)
tapes skill sync debug-react-hooks --agent claude

# Sync to Codex (AGENTS.md)
tapes skill sync debug-react-hooks --agent codex

# Sync to OpenCode
tapes skill sync debug-react-hooks --agent opencode

# Sync to all detected agents
tapes skill sync debug-react-hooks --all

# Auto-sync on skill creation
tapes skill generate --from abc123 --sync claude
```

How it works:
Each agent has a config file in the project root:
| Agent | Config File | Format |
|---|---|---|
| Claude Code | CLAUDE.md | Markdown |
| Codex | AGENTS.md | Markdown |
| OpenCode | .opencode/config.yaml | YAML |
| Cursor | .cursorrules | Text |
Tapes appends a managed section to these files:
```markdown
<!-- CLAUDE.md -->
# Project Guidelines

... existing content ...

---
<!-- tapes:skills:start - DO NOT EDIT THIS SECTION MANUALLY -->
## Tapes Skills

### debug-react-hooks
Systematic approach to debugging React hook issues:
1. Identify the problematic hook
2. Check dependency arrays for object/array references
3. Add console.logs to trace render cycle
4. Look for stale closures in callbacks
5. Verify cleanup functions in useEffect

**Common fixes:**
- Use `useMemo` for object dependencies
- Use functional updates for state: `setState(prev => ...)`
- Add cleanup: `return () => unsubscribe()`
<!-- tapes:skills:end -->
```

Config commands:
```bash
# Set default agents for this project
tapes config set agents claude,codex

# Auto-sync all skills to configured agents
tapes skill sync --all

# Remove skill from agent configs
tapes skill unsync debug-react-hooks

# Preview what would be written
tapes skill sync debug-react-hooks --dry-run
```

Project-level config (`.tapes/config.yaml`):
```yaml
# Auto-sync skills to these agents
agents:
  - claude
  - codex

# Skills enabled for this project
skills:
  - debug-react-hooks
  - contributor-info-context

# Auto-generate domain knowledge skill
auto_domain_skill: true
```

```bash
# Start chat with skill context
tapes chat --skill debug-react-hooks

# Multiple skills
tapes chat --skill debug-react-hooks --skill contributor-info-context
```

Skill prompts are injected into the system context.
Skills with `type: mcp-tool` are exposed via the MCP server:

```json
{
  "tools": [
    {
      "name": "debug_react_hooks",
      "description": "Systematic approach to debugging React hook issues",
      "inputSchema": { ... }
    }
  ]
}
```

When starting a session in a known project, suggest relevant skills:
```
Starting session in contributor.info...

💡 Suggested skills:
  • contributor-info-context (domain knowledge)
  • nextjs-debugging (workflow)

Press [s] to enable suggestions, [Enter] to skip
```
Track skill usage and refine:
```go
type SkillFeedback struct {
	SkillID   string
	SessionID string
	Helpful   bool
	Notes     string
}

// After a session that used a skill, prompt for feedback.
// Use feedback to refine the skill over time.
```

- Skill YAML schema and parser
- `tapes skill generate` command
- Pattern extraction via LLM
- Local skill storage (`~/.tapes/skills/`)
- `tapes skill list/show/delete`
- `tapes skill sync` command
- Claude Code (CLAUDE.md) writer
- Codex (AGENTS.md) writer
- OpenCode config writer
- Cursor (.cursorrules) writer
- Managed section markers (`tapes:skills:start/end`)
- `--dry-run` preview mode
- Project config (`.tapes/config.yaml`)
- `tapes chat --skill` integration
- Skills tab in deck UI
- Generation wizard in deck
- Usage tracking
- `tapes skill share` to team
- Team skills view
- MCP tool generation
- Skill import/export
- Auto-suggest skills based on project
- Skill improvement from feedback
- Automatic skill generation from high-quality sessions
- Skill versioning and diff
- Generation model: Which model for pattern extraction? (claude-sonnet-4 recommended for quality)
- Auto-generation: Should skills be auto-generated from successful sessions, or always manual?
- Skill scope: Project-specific vs. general-purpose skills?
- Versioning: How to handle skill updates while preserving history?
- Quality gate: Minimum sessions/success rate before skill is considered "validated"?
- Skills may contain sensitive patterns or domain knowledge
- Team-shared skills should be reviewable before use
- Option to redact specific content from generated skills
- Skills from untrusted sources should be sandboxed