Learning Library MCP Server

A concise MCP server for capturing and retrieving universal working knowledge. Records meta-patterns about effective work with AI agents, tools, protocols, and processes - NOT task-specific code patterns.

Author: Jason Lusk (jason@jasonlusk.com)
License: MIT
Gist URL: https://gist.github.com/mpalpha/32e02b911a6dae5751053ad158fc0e86

Features

  • Hybrid Storage: User-level + optional project-level SQLite databases
  • FTS5 Full-Text Search: Fast, ranked search across all learning fields
  • Automatic Deduplication: No duplicate patterns across databases
  • Zero Configuration: Works out-of-the-box with sensible defaults
  • Export Capabilities: JSON and Markdown export formats
  • Informed Reasoning: Multi-phase reasoning tool with context-aware guidance

Installation

Option 1: Install from Gist (Recommended)

# Clone from gist into your project
cd /path/to/your/project
mkdir -p .mcp-servers
cd .mcp-servers
git clone https://gist.github.com/mpalpha/32e02b911a6dae5751053ad158fc0e86 learning-library-mcp
cd learning-library-mcp
npm install
chmod +x index.js

Add to .mcp.json (project-level):

{
  "mcpServers": {
    "learning-library": {
      "command": "node",
      "args": [
        ".mcp-servers/learning-library-mcp/index.js"
      ],
      "env": {
        "LEARNING_LIBRARY_USER_DB": "${HOME}/.cursor/learning-library.db",
        "LEARNING_LIBRARY_PROJECT_DB": "${WORKSPACE_FOLDER}/.cursor/learning-library.db"
      }
    }
  }
}

Option 2: Install Globally

# Clone from gist to a global location
cd ~/.cursor
git clone https://gist.github.com/mpalpha/32e02b911a6dae5751053ad158fc0e86 learning-library-mcp
cd learning-library-mcp
npm install
chmod +x index.js

Add to ~/.claude.json (user-level):

{
  "mcpServers": {
    "learning-library": {
      "command": "node",
      "args": [
        "~/.cursor/learning-library-mcp/index.js"
      ],
      "env": {
        "LEARNING_LIBRARY_USER_DB": "${HOME}/.cursor/learning-library.db"
      }
    }
  }
}

Option 3: Local Development (Relative Path)

For projects that include the MCP server in their repository:

# From project root
mkdir -p learning-library-mcp
cd learning-library-mcp
# Copy index.js and package.json here
npm install
chmod +x index.js

Add to .mcp.json:

{
  "mcpServers": {
    "learning-library": {
      "command": "node",
      "args": [
        "learning-library-mcp/index.js"
      ],
      "env": {
        "LEARNING_LIBRARY_USER_DB": "${HOME}/.cursor/learning-library.db",
        "LEARNING_LIBRARY_PROJECT_DB": "${WORKSPACE_FOLDER}/.cursor/learning-library.db"
      }
    }
  }
}

Reload Claude Code

After configuration:

Cmd+Shift+P → "Developer: Reload Window"

Verify the installation (adjust the path to match where you cloned the server):

node learning-library-mcp/index.js --version
# Output: learning-library-mcp v1.0.0

Environment Variables

  • LEARNING_LIBRARY_USER_DB - User-level database path (default: ~/.cursor/learning-library.db)
  • LEARNING_LIBRARY_PROJECT_DB - Optional project-level database path
  • LEARNING_LIBRARY_MAX_RESULTS - Max search results (default: 50)
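
The server expands ${HOME}, ${WORKSPACE_FOLDER}, and a leading ~ in these paths before opening the databases. A minimal sketch of that expansion (mirroring expandPath in index.js; the example path is illustrative):

const os = require('os');
const path = require('path');

// Expand ${HOME}/$HOME, ${WORKSPACE_FOLDER}/$WORKSPACE_FOLDER, and a leading ~ in a database path.
function expandPath(filePath) {
  if (!filePath) return filePath;
  filePath = filePath.replace(/\$\{HOME\}|\$HOME/g, os.homedir());
  filePath = filePath.replace(/\$\{WORKSPACE_FOLDER\}|\$WORKSPACE_FOLDER/g, process.cwd());
  if (filePath.startsWith('~/')) filePath = path.join(os.homedir(), filePath.slice(2));
  return filePath;
}

console.log(expandPath('${HOME}/.cursor/learning-library.db'));
// e.g. /Users/you/.cursor/learning-library.db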

MCP Tools

1. informed_reasoning (Phase 2 - Active Reasoning)

NEW: Multi-phase reasoning with context-aware guidance. Replaces plain sequential thinking with reasoning grounded in context synthesized from multiple sources.

Phases:

  1. analyze - Suggests queries based on available MCP tools
  2. integrate - Synthesizes context with priority-based merging
  3. reason - Evaluates thoughts against learned patterns
  4. record - Auto-captures learning from session

Parameters:

{
  phase: 'analyze' | 'integrate' | 'reason' | 'record',

  // ANALYZE phase
  problem?: string,
  availableTools?: string[],  // e.g., ['learning-library', 'context7', 'jira']

  // INTEGRATE phase
  gatheredContext?: {
    learnings?: any[],      // From search_learnings
    localDocs?: any[],      // From file reads
    mcpData?: {             // From other MCP servers
      jira?: any,
      figma?: any,
      context7?: any
    },
    webResults?: any[]
  },

  // REASON phase
  thought?: string,
  thoughtNumber?: number,
  totalThoughts?: number,
  nextThoughtNeeded?: boolean,
  isRevision?: boolean,
  revisesThought?: number,
  branchFromThought?: number,
  branchId?: string,
  needsMoreThoughts?: boolean,

  // RECORD phase
  finalConclusion?: string,
  relatedLearnings?: number[]  // IDs of related learnings
}

Example Workflow:

// 1. ANALYZE: Get suggestions for what to query
const analysis = await mcp__learning_library__informed_reasoning({
  phase: 'analyze',
  problem: 'How do I implement user authentication in React?',
  availableTools: ['learning-library', 'context7', 'github']
});
// Returns: { suggestedQueries: { learningLibrary: {...}, context7: {...}, ... } }

// 2. Execute suggested queries (user/AI does this)
const learnings = await mcp__learning_library__search_learnings({
  query: 'authentication React'
});
const docs = await mcp__context7__query_docs({
  libraryId: '/facebook/react',
  query: 'authentication patterns'
});

// 3. INTEGRATE: Synthesize context with token budget management
const context = await mcp__learning_library__informed_reasoning({
  phase: 'integrate',
  problem: 'How do I implement user authentication in React?',
  gatheredContext: {
    learnings,
    localDocs: [{ path: 'CLAUDE.md', content: '...' }],
    mcpData: { context7: docs }
  }
});
// Returns: { synthesizedContext: 'markdown text', estimatedThoughts: 5, tokenBudget: {...} }

// 4. REASON: Evaluate each thought against context
const evaluation1 = await mcp__learning_library__informed_reasoning({
  phase: 'reason',
  problem: 'How do I implement user authentication in React?',
  thought: 'I should use JWT tokens stored in localStorage',
  thoughtNumber: 1,
  totalThoughts: 5,
  nextThoughtNeeded: true
});
// Returns: { evaluation: {...}, guidance: '...', revisionSuggestion: {...} }

// 5. Continue reasoning with multiple thoughts...

// 6. RECORD: Capture learning when complete
const recorded = await mcp__learning_library__informed_reasoning({
  phase: 'record',
  problem: 'How do I implement user authentication in React?',
  finalConclusion: 'Use JWT tokens with httpOnly cookies, not localStorage',
  relatedLearnings: [123, 456]  // IDs from search results
});
// Returns: { recorded: true, learningId: 789, ... }

Key Features:

  • Dynamic Tool Discovery: Adapts suggestions based on availableTools parameter
  • Token Budget Management: 20K token limit with intelligent pruning
  • 4-Tier Context Prioritization: Project rules → Effective learnings → Anti-patterns → External data
  • Thought Evaluation: Detects scope creep, implementing-without-reading, protocol violations
  • Auto-Learning Capture: Automatically records reasoning sessions for future use
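
The token budget noted above uses a rough estimate of one token per four characters against a 20K limit, and each context tier is only added while the budget allows. A minimal sketch of that accounting (mirroring estimateTokens and integratePhase in index.js; the tier data below is hypothetical):

// Rough token estimate: ~4 characters per token (same heuristic as index.js).
function estimateTokens(content) {
  if (!content) return 0;
  const str = typeof content === 'string' ? content : JSON.stringify(content);
  return Math.ceil(str.length / 4);
}

const MAX_TOKENS = 20000;
let used = 0;

// Hypothetical gathered context, in priority order with the budget cutoffs used by integratePhase.
const tiers = [
  { name: 'PROJECT_RULES',       cutoff: 1.0,  data: ['Always verify protocol compliance before writes'] },
  { name: 'EFFECTIVE_LEARNINGS', cutoff: 0.5,  data: [{ approach: 'Use Explore agent', outcome: 'Fast discovery' }] },
  { name: 'ANTI_PATTERNS',       cutoff: 0.6,  data: [{ avoid: 'Repeated Grep', instead: 'Explore agent' }] },
  { name: 'JIRA_REQUIREMENTS',   cutoff: 0.75, data: null }
];

const included = [];
for (const tier of tiers) {
  // A tier is included only while usage is below its cutoff fraction of the budget.
  if (tier.data && used < MAX_TOKENS * tier.cutoff) {
    used += estimateTokens(tier.data);
    included.push(tier.name);
  }
}
console.log({ included, used, remaining: MAX_TOKENS - used });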

2. record_learning

Record a learning to the library.

Parameters:

{
  scope: 'user' | 'project',  // Default: 'user'
  type: 'effective' | 'ineffective',
  domain: 'Tools' | 'Protocol' | 'Communication' | 'Process' | 'Debugging' | 'Decision',
  situation: string,   // When this applies
  approach: string,    // What was done
  outcome: string,     // What happened
  reasoning: string,   // Why it worked/failed
  alternative?: string,  // For ineffective: what to use instead
  confidence?: number,   // 0-1
  revision_of?: number,  // ID of the learning this revises
  contradicts?: string,
  supports?: string,     // Comma-separated IDs of supporting learnings
  context?: string,
  assumptions?: string,
  limitations?: string
}

Example:

await mcp__learning_library__record_learning({
  scope: 'user',
  type: 'effective',
  domain: 'Protocol',
  situation: 'Before any file write operation',
  approach: 'Call mcp__protocol-enforcer__verify_protocol_compliance first',
  outcome: 'File operation succeeds without blocking',
  reasoning: 'Protocol enforcer requires authorization token before allowing writes'
});

3. search_learnings

Search learnings with FTS5 full-text search.

Parameters:

{
  query?: string,  // FTS5 full-text search (optional)
  domain?: 'Tools' | 'Protocol' | 'Communication' | 'Process' | 'Debugging' | 'Decision',
  type?: 'effective' | 'ineffective',
  limit?: number  // Default: 50
}

Example:

// Find all Protocol learnings
const protocolPatterns = await mcp__learning_library__search_learnings({
  domain: 'Protocol',
  type: 'effective'
});

// Full-text search
const reasoningRules = await mcp__learning_library__search_learnings({
  query: 'informed_reasoning mandatory'
});

4. export_learnings

Export learnings to JSON or Markdown.

Parameters:

{
  scope: 'user' | 'project',  // Default: 'user'
  format: 'json' | 'markdown',  // Default: 'json'
  domain?: 'Tools' | 'Protocol' | 'Communication' | 'Process' | 'Debugging' | 'Decision',
  type?: 'effective' | 'ineffective',
  filename?: string  // Auto-generated if not provided
}

Example:

// Export all user-level learnings as JSON
await mcp__learning_library__export_learnings({
  scope: 'user',
  format: 'json'
});

// Export project Protocol learnings as Markdown
await mcp__learning_library__export_learnings({
  scope: 'project',
  format: 'markdown',
  domain: 'Protocol'
});

Domain Categories

  • Tools: Tool selection and usage patterns
  • Protocol: Mandatory protocol and workflow requirements
  • Communication: User communication preferences and styles
  • Process: Standard operating procedures and workflows
  • Debugging: Debugging approaches and troubleshooting methods
  • Decision: Decision-making patterns and criteria

Database Schema

CREATE TABLE learnings (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  type TEXT NOT NULL CHECK(type IN ('effective', 'ineffective')),
  domain TEXT NOT NULL CHECK(domain IN ('Tools', 'Protocol', 'Communication', 'Process', 'Debugging', 'Decision')),
  situation TEXT NOT NULL,
  approach TEXT NOT NULL,
  outcome TEXT NOT NULL,
  reasoning TEXT NOT NULL,
  alternative TEXT,
  confidence REAL CHECK(confidence BETWEEN 0 AND 1),
  revision_of INTEGER,
  contradicts TEXT,
  supports TEXT,
  context TEXT,
  assumptions TEXT,
  limitations TEXT,
  created_at TEXT DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY (revision_of) REFERENCES learnings(id)
);

-- FTS5 virtual table for full-text search
CREATE VIRTUAL TABLE learnings_fts USING fts5(
  situation, approach, outcome, reasoning, alternative, context, assumptions, limitations,
  content='learnings',
  content_rowid='id'
);
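
Searches join the learnings table to the FTS5 index and order by FTS5 rank; queries with invalid FTS syntax fall back to a LIKE scan. A minimal sketch of the ranked query (mirroring queryWithFTS in index.js; the query string is illustrative):

const Database = require('better-sqlite3');
const os = require('os');
const path = require('path');

// Open the user-level database at its default location.
const db = new Database(path.join(os.homedir(), '.cursor', 'learning-library.db'));

// Phrase-quote the query so FTS5 operators in user input cannot break the MATCH expression.
const escaped = 'authentication React'.replace(/"/g, '""');
const rows = db.prepare(`
  SELECT l.* FROM learnings l
  JOIN learnings_fts ON l.id = learnings_fts.rowid
  WHERE learnings_fts MATCH ?
  ORDER BY learnings_fts.rank
`).all(`"${escaped}"`);

console.log(`${rows.length} matching learnings`);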

Usage Patterns

Recording Effective Patterns

await mcp__learning_library__record_learning({
  scope: 'user',
  type: 'effective',
  domain: 'Tools',
  situation: 'Searching codebase for broad patterns',
  approach: 'Use Task tool with Explore agent',
  outcome: 'Found all relevant files efficiently',
  reasoning: 'Explore agent optimized for codebase discovery'
});

Recording Ineffective Patterns (Anti-Patterns)

await mcp__learning_library__record_learning({
  scope: 'user',
  type: 'ineffective',
  domain: 'Tools',
  situation: 'Searching codebase for broad patterns',
  approach: 'Use Grep tool repeatedly',
  outcome: 'Slow, missed many relevant files',
  reasoning: 'Grep requires knowing exact search terms',
  alternative: 'Use Task tool with Explore agent for broad searches'
});

Querying Before Action

// Before starting a task, query relevant patterns
const toolPatterns = await mcp__learning_library__search_learnings({
  domain: 'Tools'
});

const protocolRules = await mcp__learning_library__search_learnings({
  domain: 'Protocol',
  type: 'effective'
});
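
Search Merging and Deduplication

Search reads the project-level database first (when configured), then the user-level database, and drops later duplicates keyed on domain, situation, and approach. A minimal sketch of that merge (mirroring searchLearnings in index.js; projectResults and userResults stand in for the per-database query results):

// Hypothetical result sets from the two databases, project-level first.
const projectResults = [{ domain: 'Tools', situation: 'Broad search', approach: 'Explore agent', source: 'project' }];
const userResults = [{ domain: 'Tools', situation: 'Broad search', approach: 'Explore agent', source: 'user' }];

const seen = new Set();
const deduped = [];
for (const result of [...projectResults, ...userResults]) {
  const key = `${result.domain}|${result.situation}|${result.approach}`;
  if (!seen.has(key)) {
    seen.add(key);
    deduped.push(result);
  }
}
console.log(deduped); // Only the project-level copy survives.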

Testing

Verify Installation

node index.js --version
# Output: learning-library-mcp v1.0.0

Test MCP Connection

// From Claude Code
await mcp__learning_library__search_learnings({});
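
Test Raw stdio Transport

The server reads line-delimited JSON-RPC requests on stdin and writes responses to stdout, so it can also be exercised without an MCP client. A minimal sketch (the script path assumes the Option 3 layout; adjust it to your install location):

const { spawn } = require('child_process');

// Spawn the server and send a single tools/list request over stdio.
const server = spawn('node', ['learning-library-mcp/index.js']);

server.stdout.on('data', (chunk) => {
  console.log('response:', chunk.toString());
  server.kill();
});

server.stdin.write(JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'tools/list' }) + '\n');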

File Locations

  • User-level database: ~/.cursor/learning-library.db
  • Project-level database: ${WORKSPACE_FOLDER}/.cursor/learning-library.db (if configured)
  • Export files: .cursor/learning-exports/

Architecture

  • Single-file design: Entire MCP server in index.js
  • SQLite + FTS5: Embedded database with full-text search
  • Minimal dependencies: better-sqlite3 is the only runtime dependency
  • Environment variable configuration: No config files needed

License

MIT License - Copyright (c) 2025 Jason Lusk

index.js

#!/usr/bin/env node
/**
* Learning Library MCP Server
* A concise MCP server for capturing and retrieving universal working knowledge.
* Records meta-patterns about effective work with AI agents, tools, protocols, and processes.
*
* Version: 1.0.0
* License: MIT
*/
const Database = require('better-sqlite3');
const fs = require('fs');
const path = require('path');
const os = require('os');
const readline = require('readline');
const VERSION = '1.0.0';
// Expand environment variables in paths
function expandPath(filePath) {
if (!filePath) return filePath;
// Expand ${HOME} or $HOME
filePath = filePath.replace(/\$\{HOME\}|\$HOME/g, os.homedir());
// Expand ${WORKSPACE_FOLDER} or $WORKSPACE_FOLDER with current working directory
filePath = filePath.replace(/\$\{WORKSPACE_FOLDER\}|\$WORKSPACE_FOLDER/g, process.cwd());
// Expand ~ at the beginning
if (filePath.startsWith('~/')) {
filePath = path.join(os.homedir(), filePath.slice(2));
}
return filePath;
}
// Database schema with FTS5 full-text search + informed reasoning enhancements
const SCHEMA = `
CREATE TABLE IF NOT EXISTS learnings (
id INTEGER PRIMARY KEY AUTOINCREMENT,
type TEXT NOT NULL CHECK(type IN ('effective', 'ineffective')),
domain TEXT NOT NULL CHECK(domain IN ('Tools', 'Protocol', 'Communication', 'Process', 'Debugging', 'Decision')),
situation TEXT NOT NULL,
approach TEXT NOT NULL,
outcome TEXT NOT NULL,
reasoning TEXT NOT NULL,
alternative TEXT,
confidence REAL CHECK(confidence BETWEEN 0 AND 1),
revision_of INTEGER,
contradicts TEXT,
supports TEXT,
context TEXT,
assumptions TEXT,
limitations TEXT,
created_at TEXT DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (revision_of) REFERENCES learnings(id)
);
CREATE INDEX IF NOT EXISTS idx_domain_type ON learnings(domain, type);
CREATE INDEX IF NOT EXISTS idx_created ON learnings(created_at DESC);
CREATE VIRTUAL TABLE IF NOT EXISTS learnings_fts USING fts5(
situation, approach, outcome, reasoning, alternative, context, assumptions, limitations,
content='learnings',
content_rowid='id'
);
CREATE TRIGGER IF NOT EXISTS learnings_ai AFTER INSERT ON learnings BEGIN
INSERT INTO learnings_fts(rowid, situation, approach, outcome, reasoning, alternative, context, assumptions, limitations)
VALUES (new.id, new.situation, new.approach, new.outcome, new.reasoning, new.alternative, new.context, new.assumptions, new.limitations);
END;
CREATE TRIGGER IF NOT EXISTS learnings_ad AFTER DELETE ON learnings BEGIN
DELETE FROM learnings_fts WHERE rowid = old.id;
END;
CREATE TRIGGER IF NOT EXISTS learnings_au AFTER UPDATE ON learnings BEGIN
UPDATE learnings_fts SET situation=new.situation, approach=new.approach,
outcome=new.outcome, reasoning=new.reasoning, alternative=new.alternative,
context=new.context, assumptions=new.assumptions, limitations=new.limitations
WHERE rowid = new.id;
END;
`;
// Initialize database with schema
function initDatabase(dbPath) {
const dir = path.dirname(dbPath);
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
const db = new Database(dbPath);
db.exec(SCHEMA);
return db;
}
// Load databases from environment variables
function loadDatabases() {
const dbs = {};
// Always load user-level database (from env or default)
const userDbPath = expandPath(
process.env.LEARNING_LIBRARY_USER_DB ||
path.join(os.homedir(), '.cursor', 'learning-library.db')
);
dbs.user = initDatabase(userDbPath);
// Load project-level database if env var is set
if (process.env.LEARNING_LIBRARY_PROJECT_DB) {
const projectDbPath = expandPath(process.env.LEARNING_LIBRARY_PROJECT_DB);
dbs.project = initDatabase(projectDbPath);
}
return dbs;
}
// Record learning to specified database
function recordLearning(dbs, params) {
const scope = params.scope || 'user';
const db = dbs[scope];
if (!db) {
throw new Error(`Database scope '${scope}' not configured`);
}
const stmt = db.prepare(`
INSERT INTO learnings (
type, domain, situation, approach, outcome, reasoning, alternative,
confidence, revision_of, contradicts, supports, context, assumptions, limitations
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`);
const result = stmt.run(
params.type,
params.domain,
params.situation,
params.approach,
params.outcome,
params.reasoning,
params.alternative || null,
params.confidence || null,
params.revision_of || null,
params.contradicts || null,
params.supports || null,
params.context || null,
params.assumptions || null,
params.limitations || null
);
return {
success: true,
id: result.lastInsertRowid,
scope: scope,
message: `Learning recorded to ${scope}-level database`
};
}
// Query with FTS5 full-text search
function queryWithFTS(db, params, source) {
if (params.query) {
try {
// Escape FTS5 special characters by wrapping phrases in quotes
const escapedQuery = params.query.replace(/"/g, '""');
// Use FTS5 for text search with ranking
return db.prepare(`
SELECT l.*, ? as source FROM learnings l
JOIN learnings_fts ON l.id = learnings_fts.rowid
WHERE learnings_fts MATCH ?
AND (? IS NULL OR l.domain = ?)
AND (? IS NULL OR l.type = ?)
ORDER BY learnings_fts.rank
`).all(source, `"${escapedQuery}"`, params.domain, params.domain, params.type, params.type);
} catch (error) {
// If FTS5 query fails (invalid syntax), fall back to LIKE search
console.error('[learning-library-mcp] FTS5 query failed, falling back to LIKE:', error.message);
const likePattern = `%${params.query}%`;
return db.prepare(`
SELECT *, ? as source FROM learnings
WHERE (situation LIKE ? OR approach LIKE ? OR outcome LIKE ? OR reasoning LIKE ?)
AND (? IS NULL OR domain = ?)
AND (? IS NULL OR type = ?)
ORDER BY created_at DESC
`).all(source, likePattern, likePattern, likePattern, likePattern, params.domain, params.domain, params.type, params.type);
}
} else {
// Direct query without FTS
return db.prepare(`
SELECT *, ? as source FROM learnings
WHERE (? IS NULL OR domain = ?)
AND (? IS NULL OR type = ?)
ORDER BY created_at DESC
`).all(source, params.domain, params.domain, params.type, params.type);
}
}
// Search learnings across both databases
function searchLearnings(dbs, params) {
const results = [];
// Search project database first (higher priority)
if (dbs.project) {
const projectResults = queryWithFTS(dbs.project, params, 'project');
results.push(...projectResults);
}
// Search user database
const userResults = queryWithFTS(dbs.user, params, 'user');
results.push(...userResults);
// Deduplicate by (domain, situation, approach)
const seen = new Set();
const deduped = [];
for (const result of results) {
const key = `${result.domain}|${result.situation}|${result.approach}`;
if (!seen.has(key)) {
seen.add(key);
deduped.push(result);
}
}
// Apply limit
const limit = params.limit || parseInt(process.env.LEARNING_LIBRARY_MAX_RESULTS || '50');
return deduped.slice(0, limit);
}
// Export learnings to JSON or Markdown
function exportLearnings(dbs, params) {
const scope = params.scope || 'user';
const db = dbs[scope];
if (!db) {
throw new Error(`Database scope '${scope}' not configured`);
}
let query = 'SELECT * FROM learnings WHERE 1=1';
const bindings = [];
if (params.domain) {
query += ' AND domain = ?';
bindings.push(params.domain);
}
if (params.type) {
query += ' AND type = ?';
bindings.push(params.type);
}
query += ' ORDER BY created_at DESC';
const learnings = db.prepare(query).all(...bindings);
const format = params.format || 'json';
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
const filename = params.filename || `learnings_${scope}_${timestamp}.${format === 'json' ? 'json' : 'md'}`;
const exportDir = path.join(process.cwd(), '.cursor', 'learning-exports');
if (!fs.existsSync(exportDir)) {
fs.mkdirSync(exportDir, { recursive: true });
}
const exportPath = path.join(exportDir, filename);
if (format === 'json') {
fs.writeFileSync(exportPath, JSON.stringify(learnings, null, 2), 'utf8');
} else {
// Markdown format
let markdown = `# Learning Library Export\n\n`;
markdown += `**Scope:** ${scope}\n`;
markdown += `**Date:** ${new Date().toISOString()}\n`;
markdown += `**Count:** ${learnings.length}\n\n`;
for (const learning of learnings) {
markdown += `## ${learning.domain} - ${learning.type}\n\n`;
markdown += `**Situation:** ${learning.situation}\n\n`;
markdown += `**Approach:** ${learning.approach}\n\n`;
markdown += `**Outcome:** ${learning.outcome}\n\n`;
markdown += `**Reasoning:** ${learning.reasoning}\n\n`;
if (learning.alternative) {
markdown += `**Alternative:** ${learning.alternative}\n\n`;
}
markdown += `**Created:** ${learning.created_at}\n\n`;
markdown += `---\n\n`;
}
fs.writeFileSync(exportPath, markdown, 'utf8');
}
return {
success: true,
path: exportPath,
count: learnings.length,
format: format
};
}
// ============================================================================
// PHASE 2: ACTIVE REASONING - informed_reasoning Tool Implementation
// ============================================================================
// Helper: Estimate token count (rough approximation: 1 token ≈ 4 characters)
function estimateTokens(content) {
if (!content) return 0;
const str = typeof content === 'string' ? content : JSON.stringify(content);
return Math.ceil(str.length / 4);
}
// Helper: Extract protocol rules from local documentation
function extractProtocolRules(localDocs) {
const rules = [];
for (const doc of localDocs) {
// Handle both 'path' and 'source' fields for flexibility
const docPath = doc.path || doc.source || '';
if (docPath.includes('.cursor/rules/') || docPath.endsWith('CLAUDE.md')) {
// Extract rule-like patterns (lines starting with -, *, or numbered lists)
const lines = doc.content.split('\n');
for (const line of lines) {
if (/^[-*\d]+\.?\s+/.test(line.trim()) && line.length > 20) {
rules.push(line.trim());
}
}
}
}
return rules;
}
// Phase 1: ANALYZE - Identify relevant sources based on availableTools
function analyzePhase(params) {
const { problem, availableTools = [] } = params;
const suggestedQueries = {};
// Always suggest learning library query
if (availableTools.includes('learning-library') || availableTools.length === 0) {
suggestedQueries.learningLibrary = {
query: problem.toLowerCase().replace(/[?!.]/g, ''),
domain: null, // Search all domains
type: null // Both effective and ineffective
};
}
// Suggest local file searches
suggestedQueries.localFiles = [
'CLAUDE.md',
'.cursor/rules/*.mdc',
'README.md'
];
// Suggest Context7 library documentation query
if (availableTools.includes('context7')) {
suggestedQueries.context7 = {
libraryName: 'auto-detect',
query: problem
};
}
// Suggest Jira ticket search
if (availableTools.includes('jira') || availableTools.includes('mcp-atlassian')) {
suggestedQueries.jira = {
jql: `text ~ "${problem.split(' ').slice(0, 3).join(' ')}" ORDER BY updated DESC`
};
}
// Suggest Figma design search
if (availableTools.includes('figma') || availableTools.includes('figma-desktop')) {
suggestedQueries.figma = {
nodeId: null, // Request current selection
query: 'design specifications'
};
}
// Suggest GitHub code search
if (availableTools.includes('github')) {
suggestedQueries.github = {
query: problem.split(' ').slice(0, 5).join(' '),
scope: 'code'
};
}
// Suggest web search as fallback
suggestedQueries.webSearch = {
query: `${problem} best practices 2026`
};
return {
phase: 'analyze',
nextPhase: 'integrate',
suggestedQueries,
guidance: `Execute suggested queries to gather context. Prioritize learning library and local docs.`,
estimatedComplexity: problem.length > 100 ? 'high' : problem.split(' ').length > 10 ? 'medium' : 'low'
};
}
// Phase 2: INTEGRATE - Synthesize context from multiple sources
function integratePhase(params, dbs) {
const { problem, gatheredContext = {} } = params;
const synthesis = {
summary: {},
text: '',
estimatedThoughts: 3,
priority: []
};
let tokenCount = 0;
const MAX_TOKENS = 20000;
// TIER 1: MANDATORY - Project rules and protocols
const localDocs = gatheredContext.localDocs || [];
if (localDocs.length > 0) {
synthesis.summary.projectRules = extractProtocolRules(localDocs);
synthesis.priority.push('PROJECT_RULES');
tokenCount += estimateTokens(synthesis.summary.projectRules);
}
// TIER 2: HIGH PRIORITY - Effective learnings
const learnings = gatheredContext.learnings || [];
const effective = learnings
.filter(l => l.type === 'effective')
.sort((a, b) => (b.confidence || 0) - (a.confidence || 0))
.slice(0, 10);
if (effective.length > 0 && tokenCount < MAX_TOKENS * 0.5) {
synthesis.summary.effectiveLearnings = effective.map(l => ({
id: l.id,
approach: l.approach,
outcome: l.outcome,
confidence: l.confidence,
context: l.context
}));
synthesis.priority.push('EFFECTIVE_LEARNINGS');
tokenCount += estimateTokens(synthesis.summary.effectiveLearnings);
}
// TIER 2: Anti-patterns
const ineffective = learnings.filter(l => l.type === 'ineffective');
if (ineffective.length > 0 && tokenCount < MAX_TOKENS * 0.6) {
synthesis.summary.antiPatterns = ineffective.map(l => ({
id: l.id,
avoidApproach: l.approach,
useInstead: l.alternative,
reasoning: l.reasoning
}));
synthesis.priority.push('ANTI_PATTERNS');
tokenCount += estimateTokens(synthesis.summary.antiPatterns);
}
// TIER 3: External requirements (Jira, Figma, etc.)
const mcpData = gatheredContext.mcpData || {};
if (mcpData.jira && tokenCount < MAX_TOKENS * 0.75) {
synthesis.summary.requirements = {
summary: mcpData.jira.summary || mcpData.jira.fields?.summary,
description: mcpData.jira.description || mcpData.jira.fields?.description
};
synthesis.priority.push('JIRA_REQUIREMENTS');
tokenCount += estimateTokens(synthesis.summary.requirements);
}
if (mcpData.figma && tokenCount < MAX_TOKENS * 0.8) {
synthesis.summary.designSpecs = mcpData.figma;
synthesis.priority.push('FIGMA_DESIGNS');
tokenCount += estimateTokens(synthesis.summary.designSpecs);
}
// TIER 4: Truncated summaries (library docs, code examples, web)
if (mcpData.context7 && tokenCount < MAX_TOKENS * 0.9) {
synthesis.summary.libraryDocs = {
library: mcpData.context7.libraryName || 'auto-detected',
summary: mcpData.context7.docs ? String(mcpData.context7.docs).slice(0, 500) : 'N/A'
};
tokenCount += estimateTokens(synthesis.summary.libraryDocs);
}
// Generate synthesized text
let text = `# Context for: ${problem}\n\n`;
if (synthesis.summary.projectRules && synthesis.summary.projectRules.length > 0) {
text += `## Project Rules (MANDATORY)\n${synthesis.summary.projectRules.slice(0, 5).join('\n')}\n\n`;
}
if (synthesis.summary.effectiveLearnings && synthesis.summary.effectiveLearnings.length > 0) {
text += `## Effective Patterns (from past work)\n`;
for (const learning of synthesis.summary.effectiveLearnings.slice(0, 3)) {
text += `- **Approach:** ${learning.approach}\n`;
text += ` **Outcome:** ${learning.outcome} (confidence: ${learning.confidence || 'N/A'})\n`;
}
text += '\n';
}
if (synthesis.summary.antiPatterns && synthesis.summary.antiPatterns.length > 0) {
text += `## Anti-Patterns (AVOID)\n`;
for (const antiPattern of synthesis.summary.antiPatterns) {
text += `- **Don't:** ${antiPattern.avoidApproach}\n`;
text += ` **Instead:** ${antiPattern.useInstead}\n`;
}
text += '\n';
}
if (synthesis.summary.requirements) {
text += `## Requirements\n${synthesis.summary.requirements.summary}\n\n`;
}
synthesis.text = text;
// Estimate thoughts needed based on complexity
const complexity =
(synthesis.summary.projectRules?.length || 0) +
(synthesis.summary.effectiveLearnings?.length || 0) +
(synthesis.summary.antiPatterns?.length || 0);
synthesis.estimatedThoughts = Math.max(3, Math.min(10, Math.ceil(complexity / 3)));
return {
phase: 'integrate',
nextPhase: 'reason',
contextSummary: synthesis.summary,
synthesizedContext: synthesis.text,
guidance: `Begin reasoning with provided context. Estimated ${synthesis.estimatedThoughts} thoughts needed.`,
estimatedThoughts: synthesis.estimatedThoughts,
tokenBudget: {
used: tokenCount,
remaining: MAX_TOKENS - tokenCount,
warning: tokenCount > MAX_TOKENS * 0.9 ? 'Approaching token limit' : null
}
};
}
// Phase 3: REASON - Evaluate thought against context
function reasonPhase(params, dbs) {
const {
problem,
thought,
thoughtNumber,
totalThoughts,
nextThoughtNeeded,
isRevision = false,
revisesThought = null,
branchFromThought = null,
branchId = null,
needsMoreThoughts = false
} = params;
// Evaluate thought against available context
const evaluation = {
alignment: 'good',
protocolCompliant: true,
patternMatch: true,
issues: []
};
// Check if thought mentions implementing without reading
if (thought.toLowerCase().includes('implement') &&
!thought.toLowerCase().includes('read') &&
!thought.toLowerCase().includes('check') &&
!thought.toLowerCase().includes('verify')) {
evaluation.issues.push('Implementing without reading existing code first');
evaluation.alignment = 'poor';
}
// Check if thought suggests adding features not requested
if (thought.toLowerCase().match(/also|additionally|while we're at it/)) {
evaluation.issues.push('Potential scope creep detected');
}
// Provide guidance
let guidance = '';
if (nextThoughtNeeded === false) {
if (evaluation.issues.length === 0) {
guidance = 'Reasoning complete. Proceed to record phase.';
} else {
guidance = `Issues detected: ${evaluation.issues.join(', ')}. Consider revising.`;
}
} else {
guidance = 'Continue reasoning. Consider checking against effective patterns.';
}
// Suggest revision if major issues
const revisionSuggestion = evaluation.issues.length > 0 && !isRevision ? {
shouldRevise: true,
thought: thoughtNumber,
reason: evaluation.issues[0]
} : undefined;
// Suggest branching for exploration
const branchSuggestion =
thoughtNumber > 2 &&
thought.toLowerCase().match(/could|might|alternative|or/) &&
!branchId ? {
shouldBranch: true,
reason: 'Multiple approaches detected',
branchOptions: ['option-a', 'option-b']
} : undefined;
return {
phase: 'reason',
thoughtNumber,
totalThoughts: needsMoreThoughts ? totalThoughts + 1 : totalThoughts,
nextPhase: nextThoughtNeeded ? 'reason' : 'record',
guidance,
suggestedActions: evaluation.issues.length > 0 ? ['Review effective learnings', 'Check protocol rules'] : [],
nextThoughtNeeded,
branches: branchId ? [branchId] : [],
thoughtHistoryLength: thoughtNumber,
evaluation,
branchSuggestion,
revisionSuggestion
};
}
// Phase 4: RECORD - Capture learning
function recordPhase(params, dbs) {
const { problem, finalConclusion, relatedLearnings = [] } = params;
// Auto-capture learning from reasoning session
const learning = {
type: 'effective',
domain: 'Process',
situation: problem,
approach: finalConclusion,
outcome: 'Reasoning completed with context',
reasoning: 'Applied informed reasoning with multi-source context',
confidence: 0.7,
supports: relatedLearnings.length > 0 ? relatedLearnings.join(',') : null,
scope: 'user'
};
try {
const result = recordLearning(dbs, learning);
return {
phase: 'record',
recorded: true,
learningId: result.id,
learning,
guidance: `Learning recorded (ID: ${result.id}). Available for future reasoning sessions.`,
relatedLearnings
};
} catch (error) {
return {
phase: 'record',
recorded: false,
learning,
guidance: `Failed to record learning: ${error.message}`,
relatedLearnings
};
}
}
// Main handler for informed_reasoning tool
function informedReasoning(dbs, params) {
const { phase } = params;
switch (phase) {
case 'analyze':
return analyzePhase(params);
case 'integrate':
return integratePhase(params, dbs);
case 'reason':
return reasonPhase(params, dbs);
case 'record':
return recordPhase(params, dbs);
default:
throw new Error(`Unknown phase: ${phase}. Must be 'analyze', 'integrate', 'reason', or 'record'.`);
}
}
// ============================================================================
// END PHASE 2 IMPLEMENTATION
// ============================================================================
// MCP Protocol implementation
async function main() {
// Handle --version flag
if (process.argv.includes('--version')) {
console.log(`learning-library-mcp v${VERSION}`);
process.exit(0);
}
// Load databases
let dbs;
try {
dbs = loadDatabases();
console.error('[learning-library-mcp] Databases loaded successfully');
} catch (error) {
console.error('[learning-library-mcp] FATAL: Failed to load databases:', error.message);
console.error('[learning-library-mcp] Stack:', error.stack);
process.exit(1);
}
// Use readline for proper line-by-line stdin reading
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
terminal: false
});
rl.on('line', async (line) => {
try {
if (!line.trim()) return;
const request = JSON.parse(line);
await handleRequest(dbs, request);
} catch (error) {
// For parse errors, try to extract ID from malformed JSON, otherwise use -1
let requestId = -1;
try {
const partialParse = JSON.parse(line);
if (partialParse && partialParse.id !== undefined) {
requestId = partialParse.id;
}
} catch (e) {
// If we can't even partially parse, use -1
}
console.error('[learning-library-mcp] Parse error:', error.message);
sendError(requestId, -32700, 'Parse error', error.message);
}
});
// Keep process alive indefinitely for MCP persistent connection
// This interval ensures the event loop never exits
setInterval(() => {
// Keep-alive: This keeps the process running
// MCP servers must maintain persistent stdio connection
}, 2147483647); // Maximum safe timeout (~24.8 days)
}
// Handle MCP requests
async function handleRequest(dbs, request) {
try {
const { id, method, params } = request;
// Ensure we have a valid ID (required by JSON-RPC 2.0)
const requestId = (id !== undefined && id !== null) ? id : -1;
switch (method) {
case 'initialize':
sendResponse(requestId, {
protocolVersion: '2024-11-05',
capabilities: {
tools: {}
},
serverInfo: {
name: 'learning-library',
version: VERSION
}
});
break;
case 'tools/list':
sendResponse(requestId, {
tools: [
{
name: 'record_learning',
description: 'Record a learning to the library',
inputSchema: {
type: 'object',
properties: {
scope: { type: 'string', enum: ['user', 'project'], default: 'user' },
type: { type: 'string', enum: ['effective', 'ineffective'] },
domain: { type: 'string', enum: ['Tools', 'Protocol', 'Communication', 'Process', 'Debugging', 'Decision'] },
situation: { type: 'string' },
approach: { type: 'string' },
outcome: { type: 'string' },
reasoning: { type: 'string' },
alternative: { type: 'string' },
confidence: { type: 'number', minimum: 0, maximum: 1 },
revision_of: { type: 'integer' },
contradicts: { type: 'string' },
supports: { type: 'string' },
context: { type: 'string' },
assumptions: { type: 'string' },
limitations: { type: 'string' }
},
required: ['type', 'domain', 'situation', 'approach', 'outcome', 'reasoning']
}
},
{
name: 'search_learnings',
description: 'Search learnings with FTS5 full-text search',
inputSchema: {
type: 'object',
properties: {
query: { type: 'string' },
domain: { type: 'string', enum: ['Tools', 'Protocol', 'Communication', 'Process', 'Debugging', 'Decision'] },
type: { type: 'string', enum: ['effective', 'ineffective'] },
limit: { type: 'number', default: 50 }
}
}
},
{
name: 'export_learnings',
description: 'Export learnings to JSON or Markdown',
inputSchema: {
type: 'object',
properties: {
scope: { type: 'string', enum: ['user', 'project'], default: 'user' },
format: { type: 'string', enum: ['json', 'markdown'], default: 'json' },
domain: { type: 'string', enum: ['Tools', 'Protocol', 'Communication', 'Process', 'Debugging', 'Decision'] },
type: { type: 'string', enum: ['effective', 'ineffective'] },
filename: { type: 'string' }
}
}
},
{
name: 'informed_reasoning',
description: 'Multi-phase reasoning with context-aware guidance. Phases: analyze (suggest queries), integrate (synthesize context), reason (evaluate thoughts), record (capture learning).',
inputSchema: {
type: 'object',
properties: {
phase: { type: 'string', enum: ['analyze', 'integrate', 'reason', 'record'] },
problem: { type: 'string' },
availableTools: { type: 'array', items: { type: 'string' } },
gatheredContext: {
type: 'object',
properties: {
learnings: { type: 'array' },
localDocs: { type: 'array' },
mcpData: { type: 'object' },
webResults: { type: 'array' }
}
},
thought: { type: 'string' },
thoughtNumber: { type: 'integer', minimum: 1 },
totalThoughts: { type: 'integer', minimum: 1 },
nextThoughtNeeded: { type: 'boolean' },
isRevision: { type: 'boolean' },
revisesThought: { type: 'integer', minimum: 1 },
branchFromThought: { type: 'integer', minimum: 1 },
branchId: { type: 'string' },
needsMoreThoughts: { type: 'boolean' },
finalConclusion: { type: 'string' },
relatedLearnings: { type: 'array', items: { type: 'integer' } }
},
required: ['phase']
}
}
]
});
break;
case 'tools/call':
const toolName = params.name;
const toolParams = params.arguments || {};
let result;
switch (toolName) {
case 'record_learning':
result = recordLearning(dbs, toolParams);
break;
case 'search_learnings':
result = searchLearnings(dbs, toolParams);
break;
case 'export_learnings':
result = exportLearnings(dbs, toolParams);
break;
case 'informed_reasoning':
result = informedReasoning(dbs, toolParams);
break;
default:
throw new Error(`Unknown tool: ${toolName}`);
}
sendResponse(requestId, {
content: [
{
type: 'text',
text: JSON.stringify(result, null, 2)
}
]
});
break;
default:
sendError(requestId, -32601, 'Method not found', `Unknown method: ${method}`);
}
} catch (error) {
console.error('[learning-library-mcp] Error handling request:', error.message);
console.error('[learning-library-mcp] Stack:', error.stack);
console.error('[learning-library-mcp] Request:', JSON.stringify(request));
const errorId = (request.id !== undefined && request.id !== null) ? request.id : -1;
sendError(errorId, -32603, 'Internal error', error.message);
}
}
// Send JSON-RPC response
function sendResponse(id, result) {
const response = {
jsonrpc: '2.0',
id: id,
result: result
};
console.error(`[learning-library-mcp] Sending response for id=${id}, method=${result.protocolVersion ? 'initialize' : result.tools ? 'tools/list' : 'unknown'}`);
console.log(JSON.stringify(response));
}
// Send JSON-RPC error
function sendError(id, code, message, data) {
const response = {
jsonrpc: '2.0',
id: id,
error: {
code: code,
message: message,
data: data
}
};
console.error(`[learning-library-mcp] Sending error for id=${id}, code=${code}, message=${message}`);
console.log(JSON.stringify(response));
}
// Start server
main().catch(error => {
console.error('Fatal error:', error);
process.exit(1);
});
package.json

{
"name": "learning-library-mcp",
"version": "1.0.0",
"description": "MCP server for universal working knowledge storage and retrieval",
"author": "Jason Lusk <jason@jasonlusk.com>",
"license": "MIT",
"main": "index.js",
"bin": {
"learning-library": "./index.js"
},
"dependencies": {
"better-sqlite3": "^11.0.0"
},
"engines": {
"node": ">=14.0.0"
},
"keywords": [
"mcp",
"model-context-protocol",
"learning-library",
"ai-assistant",
"knowledge-management",
"pattern-library"
]
}