protocol enforcer mcp server - "setup protocol enforcement using https://gist.github.com/mpalpha/c2f1723868c86343e590ed38e80f264d"

Protocol Enforcer MCP Server

A Model Context Protocol (MCP) server that enforces custom workflow protocols before allowing AI agents to perform file operations.

Author: Jason Lusk (jason@jasonlusk.com)
License: MIT
Gist URL: https://gist.github.com/mpalpha/c2f1723868c86343e590ed38e80f264d

What This Does

A universal gatekeeper for AI coding assistants that support the Model Context Protocol:

  • ✅ Works with any MCP-compatible client (Claude Code, Cursor, Cline, Zed, Continue)
  • ✅ Enforces custom protocol steps before planning/coding
  • ✅ Tracks required checklist items specific to your project
  • ✅ Records compliance violations over time
  • ✅ Fully configurable - adapt to any workflow
  • ✅ Runs from npx - no installation needed

What's New

v2.0.1 (Bug Fix) - January 2026

Fixed: Deadlock in authorize_file_operation parameter validation

  • Changed operation_token from required to optional in MCP schema
  • Function now handles missing token gracefully with helpful error message
  • Resolves a UX issue where the hook error message misled users
  • Impact: Users now get clear guidance: "No operation token provided. You must call verify_protocol_compliance first to obtain a token."

v2.0 (Foundation Strategies)

Version 2.0.0 introduces Foundation Strategies - a comprehensive optimization framework that reduces protocol overhead by 66-71% while maintaining strict enforcement.

Key Features

🚀 Performance Optimization

  • Session Tokens: Multi-tool workflows without repeated protocol flows (60-minute TTL)
  • Macro-Gates: 21 checklist items → 3 gates (ANALYZE, PLAN, VALIDATE)
  • Fast-Track Mode: Low-complexity tasks skip PLAN gate automatically
  • Unified Tokens: Single token file replaces 3 separate files

📉 Overhead Reduction

  • Baseline (v1.0): ~16 tool calls per workflow
  • Foundation (v2.0): ~5.3 tool calls per workflow
  • Total Reduction: 66-71% fewer protocol interactions (see the arithmetic check below)
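
(Check: 1 - 5.3/16 ≈ 0.669, i.e. roughly 67% fewer calls; the quoted 66-71% range presumably reflects variation across workflows and configurations.)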

⚙️ New Configuration Presets

  • config.foundation.json - All optimizations enabled (recommended)
  • Backward compatible with v1.0 configurations
  • Fast-track criteria: single-line edits, comment changes, string literals

Breaking Changes

⚠️ New Dependency: uuid@^9.0.0 (Phase 4: Unified Tokens)

  • First npx install will download uuid from npm
  • Pure JavaScript, no native compilation
  • If you prefer zero dependencies, use v1.0 configs (disable unified_tokens strategy)

Migration from v1.0

Option 1: Full Foundation (Recommended)

# Download Foundation config
curl -o .protocol-enforcer.json https://gist.githubusercontent.com/mpalpha/c2f1723868c86343e590ed38e80f264d/raw/config.foundation.json

# Reload IDE

Option 2: Selective Adoption. Update your existing config (the // comments below are explanatory; remove them if your parser requires strict JSON):

{
  "version": "2.0.0",
  "strategies_enabled": {
    "session_tokens": true,      // Enable for 60-min workflows
    "extended_enforcement": true, // Add WebSearch/Grep/Task to enforcement
    "macro_gates": true,          // Simplify checklist to 3 gates
    "fast_track_mode": true,      // Skip PLAN for trivial changes
    "unified_tokens": false       // Disable to avoid uuid dependency
  }
}

Option 3: Stay on v1.0. No changes needed; v2.0 is fully backward compatible with v1.0 configs.


Platform Support

| Platform | Config File | Hook Support | Enforcement |
| --- | --- | --- | --- |
| Claude Code | .mcp.json or ~/.claude.json | ✅ Full (all 5 hooks) | Automatic blocking |
| Cursor | ~/.cursor/mcp.json | ✅ Standard (PreToolUse) | Automatic blocking |
| Cline | ~/.cline/mcp.json | ⚠️ Limited (PostToolUse) | Audit only |
| Zed | ~/.config/zed/mcp.json | ❌ None | Voluntary |
| Continue | ~/.continue/mcp.json | ⚠️ Limited | Voluntary |

Available Hooks: user_prompt_submit, session_start, pre_tool_use, post_tool_use, stop


Quick Installation (Users)

Complete setup in 6 steps - installs MCP server + hooks for automatic enforcement.

1. Add MCP Server

Add to your platform's MCP config file (paths above):

{
  "mcpServers": {
    "protocol-enforcer": {
      "command": "npx",
      "args": ["-y", "https://gist.github.com/mpalpha/c2f1723868c86343e590ed38e80f264d"]
    }
  }
}

Claude Code only: If using .claude/settings.local.json with enabledMcpjsonServers, add "protocol-enforcer".
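
A minimal sketch of that allowlist entry (assumes you already use .claude/settings.local.json; other keys omitted):

{
  "enabledMcpjsonServers": ["protocol-enforcer"]
}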

2. Create Configuration

Download the Foundation config template (recommended):

curl -o .protocol-enforcer.json \
  https://gist.githubusercontent.com/mpalpha/c2f1723868c86343e590ed38e80f264d/raw/config.foundation.json

Or create minimal config in .protocol-enforcer.json:

{
  "enforced_rules": {
    "require_protocol_steps": [
      {
        "name": "planning",
        "hook": "pre_tool_use",
        "applies_to": ["Write", "Edit"]
      }
    ],
    "require_checklist_confirmation": true,
    "minimum_checklist_items": 2
  },
  "checklist_items": [
    {
      "text": "Requirements gathered",
      "hook": "pre_tool_use"
    },
    {
      "text": "Existing patterns analyzed",
      "hook": "pre_tool_use"
    },
    {
      "text": "Linting passed",
      "hook": "post_tool_use"
    }
  ]
}

See: Example Configurations for more options.

3. Install Enforcement Hooks (Required for Automatic Blocking)

Create hooks directory:

mkdir -p .cursor/hooks

Create hook files from Appendix C:

You need to create 3 hook scripts. See Appendix C: Hook Scripts Reference for complete code.

Required hooks:

  1. .cursor/hooks/pre-tool-use.cjs - Blocks Write/Edit operations without valid tokens (CRITICAL FOR ENFORCEMENT)
  2. .cursor/hooks/post-tool-use.cjs - Logs successful operations to audit trail
  3. .cursor/hooks/user-prompt-submit.cjs - Reminds Claude to use informed_reasoning first

Quick setup (AI-assisted installation recommended):

  • If installing via Claude: "Create hooks from Appendix C"
  • If installing manually: Copy hook code from Appendix C sections below

Make all executable:

chmod +x .cursor/hooks/*.cjs


If you skip hooks: MCP tools still work, but there is no automatic blocking; Claude must comply voluntarily.

4. Configure Hooks in Claude Code

Add to .claude/settings.json (project) or ~/.claude/settings.json (user):

{
  "hooks": {
    "UserPromptSubmit": [{
      "hooks": [{
        "type": "command",
        "command": "${workspaceFolder}/.cursor/hooks/user-prompt-submit.cjs"
      }]
    }],
    "PreToolUse": [{
      "matcher": "Write|Edit|NotebookEdit",
      "hooks": [{
        "type": "command",
        "command": "${workspaceFolder}/.cursor/hooks/pre-tool-use.cjs"
      }]
    }],
    "PostToolUse": [{
      "matcher": "Write|Edit|NotebookEdit",
      "hooks": [{
        "type": "command",
        "command": "${workspaceFolder}/.cursor/hooks/post-tool-use.cjs"
      }]
    }]
  }
}

Note: Replace ${workspaceFolder} with absolute path if needed.

5. Reload IDE

  • Claude Code/Cursor/VSCode: Cmd+Shift+P → "Developer: Reload Window"
  • Zed: Restart Zed

6. Verify

Test MCP connection:

await mcp__protocol_enforcer__get_protocol_config();

Test enforcement (should be blocked):

// Try to write without authorization - should fail
await Write({ file_path: "test.txt", content: "test" });
// Expected: ⛔ protocol-enforcer: must call informed_reasoning (analyze phase) first

Installation for AI Agents

When a user requests installation:

  1. Detect platform - Check which AI assistant and MCP config location
  2. Analyze project - Read ALL rule files (.cursor/rules/**/*, .cursorrules, .github/**/*, docs/**/*, etc.)
  3. Extract requirements - Identify protocol steps, checklist items, and behavioral rules
  4. Determine hook support - Configure based on platform capabilities (see table above)
  5. Propose configuration - Present tailored config matching project workflow
  6. Get approval - Confirm before creating files

Detailed guide: See Appendix A: AI Agent Installation Guide


Configuration Reference

Required Format

All protocol steps and checklist items must be objects with a hook property (string format not supported).

Protocol Step Object:

{
  "name": "step_name",
  "hook": "pre_tool_use",
  "applies_to": ["Write", "Edit"]  // Optional: tool-specific filtering
}

Checklist Item Object:

{
  "text": "Item description",
  "hook": "pre_tool_use",
  "applies_to": ["Write", "Edit"]  // Optional: tool-specific filtering
}

Available Hooks

| Hook | When | Use Case |
| --- | --- | --- |
| user_prompt_submit | Before processing user message | Pre-response checks, sequential thinking |
| session_start | At session initialization | Display requirements, initialize tracking |
| pre_tool_use | Before tool execution | Primary enforcement point for file operations |
| post_tool_use | After tool execution | Validation, linting, audit logging |
| stop | Before session termination | Compliance reporting, cleanup |

Example Configurations

Minimal (3 items):

{
  "enforced_rules": {
    "require_protocol_steps": [
      { "name": "sequential_thinking", "hook": "user_prompt_submit" },
      { "name": "planning", "hook": "pre_tool_use", "applies_to": ["Write", "Edit"] }
    ],
    "require_checklist_confirmation": true,
    "minimum_checklist_items": 2
  },
  "checklist_items": [
    { "text": "Sequential thinking completed FIRST", "hook": "user_prompt_submit" },
    { "text": "Plan created and confirmed", "hook": "pre_tool_use" },
    { "text": "Completion verified", "hook": "post_tool_use" }
  ]
}

See also:

  • config.minimal.json - Basic workflow (6 items)
  • config.development.json - Full development workflow (17 items)
  • config.behavioral.json - LLM behavioral corrections (12 items)

MCP Tools

1. verify_protocol_compliance

Verify protocol steps completed for a specific hook.

Parameters:

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| hook | string | ✅ Yes | Lifecycle point: user_prompt_submit, session_start, pre_tool_use, post_tool_use, stop |
| tool_name | string | No | Tool being called (Write, Edit) for tool-specific filtering |
| protocol_steps_completed | string[] | ✅ Yes | Completed step names from config |
| checklist_items_checked | string[] | ✅ Yes | Verified checklist items from config |

Example:

const verification = await mcp__protocol_enforcer__verify_protocol_compliance({
  hook: "pre_tool_use",
  tool_name: "Write",
  protocol_steps_completed: ["planning", "analysis"],
  checklist_items_checked: ["Plan confirmed", "Patterns analyzed"]
});

// Returns: { compliant: true, operation_token: "abc123...", token_expires_in_seconds: 60 }
// Or: { compliant: false, violations: [...] }

2. authorize_file_operation

MANDATORY before Write/Edit (when using PreToolUse hooks).

Parameters:

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| operation_token | string | ✅ Yes | Token from verify_protocol_compliance (optional in the MCP schema as of v2.0.1, but the call fails with a helpful error when omitted) |

Token rules: Single-use, 60-second expiration, writes ~/.protocol-enforcer-token for hook verification.
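
A sketch of the authorize step, including the graceful missing-token path added in v2.0.1 (the exact shape of the success result is assumed for illustration):

// `verification` comes from a prior verify_protocol_compliance call (tool 1 above)
const auth = await mcp__protocol_enforcer__authorize_file_operation({
  operation_token: verification.operation_token
});

// On success, ~/.protocol-enforcer-token is written; proceed only if authorized.
// If called with no token (v2.0.1+), the server responds with guidance instead of
// deadlocking: "No operation token provided. You must call
// verify_protocol_compliance first to obtain a token."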

3. get_protocol_config

Get current configuration.

Returns: { config_path: "...", config: {...} }

4. get_compliance_status

Get compliance statistics and recent violations.

Returns: { total_checks: N, passed: N, failed: N, recent_violations: [...] }

5. initialize_protocol_config

Create new config file.

Parameters: scope: "project" | "user"
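
A combined sketch of the three informational tools (return shapes taken from the descriptions above; numbers illustrative):

// 3. Inspect the active configuration and where it was loaded from
const { config_path, config } = await mcp__protocol_enforcer__get_protocol_config();

// 4. Review compliance statistics and recent violations
const status = await mcp__protocol_enforcer__get_compliance_status();
// e.g. { total_checks: 42, passed: 40, failed: 2, recent_violations: [...] }

// 5. Bootstrap a new config file for the current project
await mcp__protocol_enforcer__initialize_protocol_config({ scope: "project" });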


Usage Workflow

Complete Example (All Hooks with informed_reasoning)

// 1. MANDATORY FIRST STEP: Call informed_reasoning (analyze phase)
// This writes ~/.protocol-informed-reasoning-token
await mcp__memory_augmented_reasoning__informed_reasoning({
  phase: "analyze",
  problem: "User request description and context needed"
});

// 2. At user message (user_prompt_submit hook) - Optional verification
await mcp__protocol_enforcer__verify_protocol_compliance({
  hook: "user_prompt_submit",
  protocol_steps_completed: ["informed_reasoning_analyze"],
  checklist_items_checked: ["Used informed_reasoning (analyze phase) tool"]
});

// 3. Before file operations (pre_tool_use hook)
const verification = await mcp__protocol_enforcer__verify_protocol_compliance({
  hook: "pre_tool_use",
  tool_name: "Write",
  protocol_steps_completed: ["informed_reasoning_analyze", "planning", "analysis"],
  checklist_items_checked: [
    "Used informed_reasoning (analyze phase) tool before proceeding",
    "Plan confirmed",
    "Patterns analyzed"
  ]
});

// 4. Authorize file operation
// This writes ~/.protocol-enforcer-token
if (verification.compliant) {
  await mcp__protocol_enforcer__authorize_file_operation({
    operation_token: verification.operation_token
  });

  // Now Write/Edit operations allowed
  // PreToolUse hook will check for BOTH tokens
}

// 5. After file operations (post_tool_use hook)
await mcp__protocol_enforcer__verify_protocol_compliance({
  hook: "post_tool_use",
  tool_name: "Write",
  protocol_steps_completed: ["execution"],
  checklist_items_checked: ["Linting passed", "Types checked"]
});

Hook Filtering

  • Only rules whose hook value matches the current hook are checked
  • If applies_to is specified, the tool name must also match
  • This enables context-specific enforcement at different lifecycle points (see the sketch below)
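
A minimal sketch of that filtering logic (illustrative; the function name is invented, but the fields follow the config format above, not the server's internal code):

// Returns true when a rule or checklist item participates in the current check
function ruleApplies(rule, hook, toolName) {
  // The rule must target the current lifecycle hook
  if (rule.hook !== hook) return false;
  // If applies_to is present, the tool name must match one of its entries
  if (Array.isArray(rule.applies_to) && rule.applies_to.length > 0) {
    return rule.applies_to.includes(toolName);
  }
  // No applies_to filter: the rule applies to every tool at this hook
  return true;
}

// ruleApplies({ hook: "pre_tool_use", applies_to: ["Write", "Edit"] }, "pre_tool_use", "Write") // true
// ruleApplies({ hook: "pre_tool_use", applies_to: ["Write", "Edit"] }, "pre_tool_use", "Grep")  // false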

Hook-Based Enforcement

For automatic blocking of unauthorized file operations (Claude Code, Cursor only).

Installation

  1. Create hooks directory:

mkdir -p .cursor/hooks

  2. Create hook scripts from templates (see Appendix C)

  3. Make executable:

chmod +x .cursor/hooks/*.cjs

  4. Configure platform:

Claude Code CLI (v2.1.7+) - Add to .claude/settings.json (project) or ~/.claude/settings.json (user):

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write|Edit|NotebookEdit",
        "hooks": [
          {
            "type": "command",
            "command": "/absolute/path/.cursor/hooks/pre-tool-use.cjs"
          }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Write|Edit|NotebookEdit",
        "hooks": [
          {
            "type": "command",
            "command": "/absolute/path/.cursor/hooks/post-tool-use.cjs"
          }
        ]
      }
    ]
  }
}

IMPORTANT: Hook response format as of Claude Code v2.1.7+ (Jan 2026):

// Deny operation
console.log(JSON.stringify({
  hookSpecificOutput: {
    hookEventName: "PreToolUse",
    permissionDecision: "deny",
    permissionDecisionReason: "Reason shown to Claude/user"
  }
}));

// Allow operation
console.log(JSON.stringify({
  hookSpecificOutput: {
    hookEventName: "PreToolUse",
    permissionDecision: "allow"
  }
}));

Replace /absolute/path/ with your actual project path.

Token Lifecycle (Two-Token Verification)

1. AI calls informed_reasoning (analyze phase)
   → writes ~/.protocol-informed-reasoning-token

2. AI calls verify_protocol_compliance → receives operation_token (60s expiration)

3. AI calls authorize_file_operation(token) → writes ~/.protocol-enforcer-token

4. AI attempts Write/Edit → PreToolUse hook intercepts
   CHECK 1: informed_reasoning token exists?
   CHECK 2: protocol-enforcer token exists?
   - Both found → consume both (delete), allow operation
   - Either missing → block operation

5. Next Write/Edit → Both tokens missing → blocked

Result: Two-factor verification ensures both thinking and protocol compliance.

Why Two Tokens?

  • Reasoning Token: Physical proof that informed_reasoning tool was actually called
  • Protocol Token: Authorization after verifying all protocol steps and checklist items
  • Cross-Process Verification: Hooks run separately from MCP server, tokens provide shared state

Integration with Supervisor Protocols

Add to your project's supervisor rules:

  • Claude Code: .cursor/rules/protocol-enforcer.mdc
  • Cursor: .cursorrules
  • Cline: .clinerules
  • Continue: .continuerules

## Protocol Enforcer Integration (MANDATORY)

Before ANY file write/edit operation:
1. Complete required protocol steps from `.protocol-enforcer.json`
2. Call `mcp__protocol_enforcer__verify_protocol_compliance` with:
   - `hook`: lifecycle point (e.g., "pre_tool_use")
   - `protocol_steps_completed`: completed step names
   - `checklist_items_checked`: verified items
3. If `compliant: false`, fix violations and retry
4. Call `mcp__protocol_enforcer__authorize_file_operation` with token
5. Only proceed if `authorized: true`

**No exceptions allowed.**

See: Appendix B: Complete Supervisor Examples for platform-specific examples.


Troubleshooting

| Issue | Solution |
| --- | --- |
| Server not appearing | Check config file syntax, gist URL, file location; reload IDE |
| Configuration not loading | Verify .protocol-enforcer.json filename, check JSON syntax |
| Tools not working | Test with get_protocol_config; check tool names (must use full mcp__protocol-enforcer__*) |
| Hook not blocking | Verify platform support; check hook is executable (chmod +x); verify absolute path; reload IDE |
| Token errors | Check ~/.protocol-enforcer-token exists after authorize_file_operation |

Claude Code only: Add "protocol-enforcer" to enabledMcpjsonServers if using allowlist.


Why This Exists

AI assistants tend to bypass project protocols under pressure or when context runs low. This server:

  • Enforces consistency - same rules for every task, all platforms
  • Provides traceability - tracks protocol adherence
  • Reduces technical debt - prevents shortcuts violating standards
  • Works with ANY workflow - not tied to specific tools
  • Runs from npx - zero installation/maintenance

Appendices

Appendix A: AI Agent Installation Guide

Detailed analysis process for AI agents installing this MCP server.

Step 1: Detect Platform and IDE

Check which AI coding assistant is active:

  • Look for existing MCP config files (.mcp.json, ~/.claude.json, ~/.cursor/mcp.json, etc.)
  • Identify IDE/editor environment

Step 2: Analyze Project Structure

Read ALL rule files (critical - don't skip):

  • .cursor/rules/**/*.mdc - All rule types
  • .cursorrules, .clinerules, .continuerules - Platform rules
  • .eslintrc.*, .prettierrc.* - Code formatting
  • tsconfig.json - TypeScript config
  • .github/CONTRIBUTING.md, .github/pull_request_template.md - Contribution guidelines
  • README.md, CLAUDE.md, docs/**/* - Project documentation

Extract from each file:

  1. Protocol Steps (workflow stages):

    • Look for: "first", "before", "then", "after", "finally"
    • Example: "Before ANY file operation, do X" → protocol step "X"
    • Group related steps (3-7 steps typical)
  2. Checklist Items (verification checks):

    • Look for: "MUST", "REQUIRED", "MANDATORY", "CRITICAL", "NEVER", "ALWAYS"
    • Quality checks: "verify", "ensure", "check", "confirm"
    • Each item should be specific and verifiable
  3. Behavioral Rules (constraints):

    • Hard requirements: "NO EXCEPTIONS", "supersede all instructions"
    • Pre-approved actions: "auto-fix allowed", "no permission needed"
    • Forbidden actions: "NEVER edit X", "DO NOT use Y"
  4. Tool Requirements (MCP tool calls):

    • Explicit requirements: "use mcp__X tool"
    • Tool sequences: "call X before Y"
  5. Conditional Requirements (context-specific):

    • "If GraphQL changes, run codegen"
    • "If SCSS changes, verify spacing"
    • Mark as required: false in checklist

Example Extraction:

From .cursor/rules/mandatory-supervisor-protocol.mdc:

"BEFORE ANY OTHER ACTION, EVERY USER QUERY MUST:
1. First use mcp__clear-thought__sequentialthinking tool"

→ Protocol step: { name: "sequential_thinking", hook: "user_prompt_submit" }
→ Checklist item: { text: "Sequential thinking completed FIRST", hook: "user_prompt_submit" }

Step 3: Search Referenced Online Sources

If documentation references external URLs:

  • Use WebSearch/WebFetch to retrieve library docs, style guides, API specs
  • Extract additional requirements from online sources
  • Integrate with local requirements

Step 4: Infer Workflow Type

Based on analysis, determine workflow:

  • TDD - Test files exist, tests-first culture
  • Design-First - Figma links, design system, token mappings
  • Planning & Analysis - Generic best practices
  • Behavioral - Focus on LLM behavioral corrections (CHORES framework)
  • Minimal - Small projects, emergency mode

Step 5: Determine Hook Support

Configure based on platform capabilities:

| Platform | Recommended Hooks | Strategy |
| --- | --- | --- |
| Claude Code | All 5 hooks | Maximum enforcement |
| Cursor | PreToolUse + PostToolUse | Standard enforcement |
| Cline | PostToolUse only | Audit logging |
| Zed/Continue | None | Voluntary compliance |

Step 6: Propose Configuration

  1. Present findings: "I've analyzed [N] rule files and detected [workflow type]. Your platform ([platform]) supports [hooks]."
  2. Show proposed config with extracted steps and checklist items
  3. Explain trade-offs: With/without hooks, full vs. minimal enforcement
  4. Get approval before creating files

Step 7: Create Files

  1. Add MCP server to config file
  2. Create .protocol-enforcer.json with tailored configuration
  3. Create hook scripts if platform supports them
  4. Update supervisor protocol files with integration instructions
  5. Reload IDE

Appendix B: Complete Supervisor Examples

Example 1: Planning & Analysis (Claude Code)

File: .cursor/rules/protocol-enforcer.mdc

---
description: Planning & Analysis Protocol with PreToolUse Hooks
globs:
alwaysApply: true
---

## Protocol Enforcer Integration (MANDATORY)

### Required Steps (from .protocol-enforcer.json):
1. **sequential_thinking** - Complete before responding
2. **planning** - Plan implementation with objectives
3. **analysis** - Analyze codebase for reusable patterns

### Required Checklist:
- Sequential thinking completed FIRST
- Searched for reusable components/utilities
- Matched existing code patterns
- Plan confirmed by user

### Workflow:

**CRITICAL OVERRIDE RULE:**
BEFORE ANY ACTION, call `mcp__clear-thought__sequentialthinking` then `mcp__protocol_enforcer__verify_protocol_compliance`.
NO EXCEPTIONS.

**Process:**

1. **Sequential Thinking** (user_prompt_submit hook)
   - Use sequentialthinking tool
   - Verify: `mcp__protocol_enforcer__verify_protocol_compliance({ hook: "user_prompt_submit", ... })`

2. **Planning**
   - Define objectives, files to modify, dependencies
   - Mark `planning` complete

3. **Analysis**
   - Search codebase for similar features
   - Review `src/components/`, `src/hooks/`, `src/utils/`
   - Mark `analysis` complete

4. **Verify Compliance**
   ```typescript
   const v = await mcp__protocol_enforcer__verify_protocol_compliance({
     hook: "pre_tool_use",
     tool_name: "Write",
     protocol_steps_completed: ["planning", "analysis"],
     checklist_items_checked: [
       "Searched for reusable components/utilities",
       "Matched existing code patterns",
       "Plan confirmed by user"
     ]
   });
   ```

5. **Authorize**
   ```typescript
   await mcp__protocol_enforcer__authorize_file_operation({
     operation_token: v.operation_token
   });
   ```

6. **Implement**
   - Only after authorization
   - Minimal changes only
   - No scope creep

### Enforcement:

PreToolUse hooks block unauthorized file operations. A token is required per file change (60s expiration).


Config: config.development.json

Example 2: Design-First (Cursor)

File: .cursorrules

Design-First Development Protocol

Required Steps:

  1. design_review - Review Figma specs
  2. component_mapping - Map to existing/new components

Required Checklist:

  • Design tokens mapped to SCSS variables
  • Figma specs reviewed
  • Accessibility requirements checked
  • Responsive breakpoints defined

Workflow:

1. Design Review

  • Open Figma, extract design tokens (colors, spacing, typography)
  • Note accessibility (ARIA, keyboard nav)
  • Document responsive breakpoints

2. Component Mapping

  • Search for similar components
  • Decide: reuse, extend, or create
  • Map Figma tokens to SCSS variables

3. Verify Compliance

mcp__protocol_enforcer__verify_protocol_compliance({
  hook: "pre_tool_use",
  tool_name: "Write",
  protocol_steps_completed: ["design_review", "component_mapping"],
  checklist_items_checked: [
    "Design tokens mapped to SCSS variables",
    "Figma specs reviewed",
    "Accessibility requirements checked"
  ]
})

4. Authorize & Implement

After verification, authorize then proceed with component implementation.


Config: Custom design-focused config with design_review and component_mapping steps.


Example 3: Behavioral Corrections (Any Platform)

File: .cursor/rules/behavioral-protocol.mdc
---
description: LLM Behavioral Corrections (MODEL Framework CHORES)
alwaysApply: true
---

## Protocol Enforcer Integration (MANDATORY)

Enforces behavioral corrections from MODEL Framework CHORES analysis.

### Required Steps:
1. **analyze_behavior** - Analyze response for CHORES issues
2. **apply_chores_fixes** - Apply corrections before file operations

### Required Checklist (CHORES):
- **C**onstraint issues addressed (structure/format adherence)
- **H**allucination issues addressed (no false information)
- **O**verconfidence addressed (uncertainty when appropriate)
- **R**easoning issues addressed (logical consistency)
- **E**thical/Safety issues addressed (no harmful content)
- **S**ycophancy addressed (truthfulness over agreement)

### Workflow:

1. **Analyze Behavior** (user_prompt_submit)
   - Review response for CHORES issues
   - Verify: `mcp__protocol_enforcer__verify_protocol_compliance({ hook: "user_prompt_submit", ... })`

2. **Apply Fixes** (pre_tool_use)
   - Address identified CHORES issues
   - Verify all checklist items before file ops
   - Authorize with token

### Enforcement:
This config uses the default behavioral corrections from `index.js` DEFAULT_CONFIG.

Config: config.behavioral.json


Example 4: Minimal/Emergency (All Platforms)

File: .protocol-enforcer.json (minimal)

{
  "enforced_rules": {
    "require_protocol_steps": [
      { "name": "acknowledge", "hook": "pre_tool_use" }
    ],
    "require_checklist_confirmation": true,
    "minimum_checklist_items": 1
  },
  "checklist_items": [
    { "text": "I acknowledge this change", "hook": "pre_tool_use" }
  ]
}

Use: Emergency fixes, rapid prototyping only.


Platform Comparison Table

| Feature | Claude Code | Cursor | Cline | Zed/Continue |
| --- | --- | --- | --- | --- |
| Hooks Available | All 5 | PreToolUse + PostToolUse | PostToolUse | None |
| Automatic Blocking | ✅ Yes | ✅ Yes | ❌ No | ❌ No |
| Recommended Steps | 5-7 steps | 3-5 steps | 2-3 steps | 1-2 steps |
| Enforcement Level | Maximum | Standard | Audit | Voluntary |
| Best For | Production | Development | Code review | Minimal |

Appendix C: Hook Scripts Reference

All 5 hook scripts, to be created in .cursor/hooks/ or .cline/hooks/.

1. pre-tool-use.cjs

Blocks unauthorized file operations without valid tokens. Two-token verification system ensures both informed_reasoning and protocol compliance.

Updated for Claude Code v2.1.7+ (Jan 2026):

#!/usr/bin/env node
const fs = require('fs');
const path = require('path');
const os = require('os');

let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
  const tokenFile = path.join(os.homedir(), '.protocol-enforcer-token');
  const reasoningTokenFile = path.join(os.homedir(), '.protocol-informed-reasoning-token');

  // CHECK 1: Verify informed_reasoning was called
  if (!fs.existsSync(reasoningTokenFile)) {
    const response = {
      hookSpecificOutput: {
        hookEventName: "PreToolUse",
        permissionDecision: "deny",
        permissionDecisionReason: "⛔ protocol-enforcer: must call informed_reasoning (analyze phase) first"
      }
    };
    console.log(JSON.stringify(response));
    process.stderr.write('\n⛔ protocol-enforcer: informed_reasoning not called\n');
    process.exit(0);
  }

  // CHECK 2: Verify protocol compliance authorization
  if (!fs.existsSync(tokenFile)) {
    const response = {
      hookSpecificOutput: {
        hookEventName: "PreToolUse",
        permissionDecision: "deny",
        permissionDecisionReason: "⛔ protocol-enforcer: call mcp__protocol-enforcer__authorize_file_operation"
      }
    };
    console.log(JSON.stringify(response));
    process.stderr.write('\n⛔ protocol-enforcer: operation not authorized\n');
    process.exit(0);
  }

  // Both tokens exist - consume them and allow
  try {
    fs.unlinkSync(reasoningTokenFile);
    fs.unlinkSync(tokenFile);

    const response = {
      hookSpecificOutput: {
        hookEventName: "PreToolUse",
        permissionDecision: "allow",
        permissionDecisionReason: "✅ protocol-enforcer: all requirements met"
      }
    };
    console.log(JSON.stringify(response));
    process.stderr.write('✅ protocol-enforcer: operation authorized (protocol + informed_reasoning verified)\n');
    process.exit(0);
  } catch (e) {
    const response = {
      hookSpecificOutput: {
        hookEventName: "PreToolUse",
        permissionDecision: "deny",
        permissionDecisionReason: `⛔ protocol-enforcer: token error - ${e.message}`
      }
    };
    console.log(JSON.stringify(response));
    process.stderr.write(`\n⛔ protocol-enforcer: token consumption failed - ${e.message}\n`);
    process.exit(0);
  }
});

2. post-tool-use.cjs

Logs successful operations to audit trail.

#!/usr/bin/env node
const fs = require('fs');
const path = require('path');
const os = require('os');

let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
  try {
    const hookData = JSON.parse(input);
    const logFile = path.join(os.homedir(), '.protocol-enforcer-audit.log');

    const logEntry = {
      timestamp: new Date().toISOString(),
      tool: hookData.toolName || 'unknown',
      session: hookData.sessionId || 'unknown',
      success: true
    };

    fs.appendFileSync(logFile, JSON.stringify(logEntry) + '\n', 'utf8');
    process.exit(0);
  } catch (e) {
    process.exit(0); // Silent fail - don't block on logging errors
  }
});

3. user-prompt-submit.cjs

Detects and blocks bypass attempts, and injects a protocol reminder into file-operation prompts.

#!/usr/bin/env node
const fs = require('fs');
const path = require('path');

let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
  try {
    const hookData = JSON.parse(input);
    const userPrompt = hookData.userPrompt || '';

    // Detect bypass attempts
    const bypassPatterns = [
      /ignore.*protocol/i,
      /skip.*verification/i,
      /bypass.*enforcer/i,
      /disable.*mcp/i
    ];

    for (const pattern of bypassPatterns) {
      if (pattern.test(userPrompt)) {
        process.stderr.write('⛔ BYPASS ATTEMPT DETECTED: Protocol enforcement cannot be disabled.\n');
        process.exit(2); // Block
      }
    }

    // Inject protocol reminder for file operations
    if (/write|edit|create|modify/i.test(userPrompt)) {
      const reminder = '\n\n[PROTOCOL REMINDER: Before file operations, call mcp__protocol-enforcer__verify_protocol_compliance and mcp__protocol-enforcer__authorize_file_operation]';
      console.log(JSON.stringify({
        userPrompt: userPrompt + reminder
      }));
    } else {
      console.log(input); // Pass through unchanged
    }

    process.exit(0);
  } catch (e) {
    console.log(input); // Pass through on error
    process.exit(0);
  }
});

4. session-start.cjs

Initializes compliance tracking, displays protocol requirements.

#!/usr/bin/env node
const fs = require('fs');
const path = require('path');

let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
  try {
    // Load .protocol-enforcer.json
    const cwd = process.cwd();
    const configPath = path.join(cwd, '.protocol-enforcer.json');

    if (fs.existsSync(configPath)) {
      const config = JSON.parse(fs.readFileSync(configPath, 'utf8'));

      console.error('\n📋 Protocol Enforcer Active\n');
      console.error('Required Protocol Steps:');
      config.enforced_rules.require_protocol_steps.forEach(step => {
        console.error(`  - ${step.name} (hook: ${step.hook})`);
      });
      console.error(`\nMinimum Checklist Items: ${config.enforced_rules.minimum_checklist_items}\n`);
    }

    process.exit(0);
  } catch (e) {
    process.exit(0); // Silent fail
  }
});

5. stop.cjs

Generates compliance report at end of response.

#!/usr/bin/env node
const fs = require('fs');
const path = require('path');
const os = require('os');

let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
  try {
    // Check for unused tokens
    const tokenFile = path.join(os.homedir(), '.protocol-enforcer-token');

    if (fs.existsSync(tokenFile)) {
      console.error('\n⚠️  Unused authorization token detected - was file operation skipped?\n');
      fs.unlinkSync(tokenFile); // Cleanup
    }

    // Read audit log for session summary
    const logFile = path.join(os.homedir(), '.protocol-enforcer-audit.log');

    if (fs.existsSync(logFile)) {
      const logs = fs.readFileSync(logFile, 'utf8').trim().split('\n');
      const recentLogs = logs.slice(-10); // Last 10 operations

      console.error('\n📊 Session Compliance Summary:');
      console.error(`Total operations logged: ${recentLogs.length}`);
    }

    process.exit(0);
  } catch (e) {
    process.exit(0); // Silent fail
  }
});

Appendix D: Context Persistence Hook Scripts

Enhanced hooks for preserving session state across context resets and compactions.

These hooks work together to create a robust context persistence system with validation, cleanup, and error logging.


1. pre-compact-handoff.cjs

Auto-saves session state before context compaction with validation and error logging.

#!/usr/bin/env node
/**
 * PreCompact Hook - Auto-save handoff before context compaction
 * Preserves session state in JSON format (no dependencies required)
 */

const fs = require('fs');
const path = require('path');

// Validation function
function validateHandoff(handoff) {
  const errors = [];

  // Check required fields
  if (!handoff.date) errors.push('Missing date');
  if (!handoff.session_id) errors.push('Missing session_id');
  if (!handoff.status) errors.push('Missing status');

  // Check tasks structure
  if (!handoff.tasks || typeof handoff.tasks !== 'object') {
    errors.push('Missing or invalid tasks object');
  } else {
    if (!Array.isArray(handoff.tasks.completed)) errors.push('tasks.completed must be array');
    if (!Array.isArray(handoff.tasks.in_progress)) errors.push('tasks.in_progress must be array');
    if (!Array.isArray(handoff.tasks.pending)) errors.push('tasks.pending must be array');
    if (!Array.isArray(handoff.tasks.blockers)) errors.push('tasks.blockers must be array');
  }

  // Check decisions and next_steps
  if (!Array.isArray(handoff.decisions)) errors.push('decisions must be array');
  if (!Array.isArray(handoff.next_steps)) errors.push('next_steps must be array');

  return { valid: errors.length === 0, errors };
}

// Error logging function
function logError(handoffDir, hookName, error) {
  try {
    const logPath = path.join(handoffDir, 'errors.log');
    const timestamp = new Date().toISOString();
    const logEntry = `[${timestamp}] [${hookName}] ${error}\n`;
    fs.appendFileSync(logPath, logEntry, 'utf8');
  } catch (e) {
    // Silent fail on logging error
  }
}

let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
  try {
    const hookData = JSON.parse(input);
    const sessionId = hookData.sessionId || 'unknown';
    const trigger = hookData.trigger || 'unknown';
    const cwd = hookData.cwd || process.cwd();

    // Create handoff directory
    const handoffDir = path.join(cwd, '.claude', 'handoffs');
    if (!fs.existsSync(handoffDir)) {
      fs.mkdirSync(handoffDir, { recursive: true });
    }

    // Create handoff document
    const handoff = {
      date: new Date().toISOString(),
      session_id: sessionId,
      trigger: trigger,
      summary: "Session state preserved before compaction",
      status: "in_progress",
      context: {
        cwd: cwd,
        compaction_type: trigger
      },
      tasks: {
        completed: [],
        in_progress: [],
        pending: [],
        blockers: []
      },
      decisions: [],
      next_steps: []
    };

    // Validate handoff
    const validation = validateHandoff(handoff);
    if (!validation.valid) {
      logError(handoffDir, 'PreCompact', `Validation failed: ${validation.errors.join(', ')}`);
      console.error(`\n⚠️  PreCompact: Handoff validation failed (${validation.errors.length} errors) - saving anyway`);
    }

    // Save handoff
    const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
    const filename = `${sessionId}_${timestamp}.json`;
    const handoffPath = path.join(handoffDir, filename);

    fs.writeFileSync(handoffPath, JSON.stringify(handoff, null, 2), 'utf8');

    // Update latest.json reference
    const latestPath = path.join(handoffDir, 'latest.json');
    fs.writeFileSync(latestPath, JSON.stringify({
      latest_handoff: filename,
      created: handoff.date,
      session_id: sessionId
    }, null, 2), 'utf8');

    // Write to stderr for visibility
    console.error(`\n📋 PreCompact: Handoff saved to ${filename}`);
    console.error(`   Trigger: ${trigger}`);
    if (!validation.valid) {
      console.error(`   Validation: ${validation.errors.length} issues detected`);
    }

    process.exit(0);
  } catch (e) {
    const cwd = process.cwd();
    const handoffDir = path.join(cwd, '.claude', 'handoffs');
    logError(handoffDir, 'PreCompact', `Exception: ${e.message}`);
    console.error(`\n⚠️  PreCompact hook error: ${e.message}`);
    process.exit(0);
  }
});

2. session-end-handoff.cjs

Creates final handoff with cleanup and validation when session ends.

#!/usr/bin/env node
/**
 * SessionEnd Hook - Create final handoff and cleanup
 * Captures session outcome and prepares for next session
 */

const fs = require('fs');
const path = require('path');

// Validation function
function validateHandoff(handoff) {
  const errors = [];

  // Check required fields
  if (!handoff.date) errors.push('Missing date');
  if (!handoff.session_id) errors.push('Missing session_id');
  if (!handoff.status) errors.push('Missing status');

  // Check tasks structure
  if (!handoff.tasks || typeof handoff.tasks !== 'object') {
    errors.push('Missing or invalid tasks object');
  } else {
    if (!Array.isArray(handoff.tasks.completed)) errors.push('tasks.completed must be array');
    if (!Array.isArray(handoff.tasks.in_progress)) errors.push('tasks.in_progress must be array');
    if (!Array.isArray(handoff.tasks.pending)) errors.push('tasks.pending must be array');
    if (!Array.isArray(handoff.tasks.blockers)) errors.push('tasks.blockers must be array');
  }

  // Check decisions and next_steps
  if (!Array.isArray(handoff.decisions)) errors.push('decisions must be array');
  if (!Array.isArray(handoff.next_steps)) errors.push('next_steps must be array');

  return { valid: errors.length === 0, errors };
}

// Cleanup function - keep last 10 handoff files
function cleanupOldHandoffs(handoffDir) {
  try {
    const files = fs.readdirSync(handoffDir)
      .filter(f => f.endsWith('.json') && f !== 'latest.json')
      .sort()
      .reverse();

    if (files.length > 10) {
      const toDelete = files.slice(10);
      toDelete.forEach(f => {
        fs.unlinkSync(path.join(handoffDir, f));
      });
      return toDelete.length;
    }
    return 0;
  } catch (e) {
    return 0;
  }
}

// Error logging function
function logError(handoffDir, hookName, error) {
  try {
    const logPath = path.join(handoffDir, 'errors.log');
    const timestamp = new Date().toISOString();
    const logEntry = `[${timestamp}] [${hookName}] ${error}\n`;
    fs.appendFileSync(logPath, logEntry, 'utf8');
  } catch (e) {
    // Silent fail on logging error
  }
}

let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
  try {
    const hookData = JSON.parse(input);
    const sessionId = hookData.sessionId || 'unknown';
    const reason = hookData.reason || 'unknown';
    const cwd = hookData.cwd || process.cwd();

    // Create handoff directory
    const handoffDir = path.join(cwd, '.claude', 'handoffs');
    if (!fs.existsSync(handoffDir)) {
      fs.mkdirSync(handoffDir, { recursive: true });
    }

    // Read latest handoff if exists
    const latestPath = path.join(handoffDir, 'latest.json');
    let existingHandoff = {
      tasks: { completed: [], in_progress: [], pending: [], blockers: [] },
      decisions: [],
      next_steps: []
    };

    if (fs.existsSync(latestPath)) {
      try {
        const latestInfo = JSON.parse(fs.readFileSync(latestPath, 'utf8'));
        if (latestInfo.latest_handoff) {
          const existingPath = path.join(handoffDir, latestInfo.latest_handoff);
          if (fs.existsSync(existingPath)) {
            existingHandoff = JSON.parse(fs.readFileSync(existingPath, 'utf8'));
          }
        }
      } catch (e) {
        logError(handoffDir, 'SessionEnd', `Error reading existing handoff: ${e.message}`);
      }
    }

    // Create final handoff
    const handoff = {
      date: new Date().toISOString(),
      session_id: sessionId,
      session_end_reason: reason,
      status: "completed",
      context: {
        cwd: cwd,
        ended_at: new Date().toISOString()
      },
      tasks: existingHandoff.tasks,
      decisions: existingHandoff.decisions,
      next_steps: existingHandoff.next_steps,
      notes: "Session ended. Review tasks and update handoff manually if needed."
    };

    // Validate handoff
    const validation = validateHandoff(handoff);
    if (!validation.valid) {
      logError(handoffDir, 'SessionEnd', `Validation failed: ${validation.errors.join(', ')}`);
      console.error(`\n⚠️  SessionEnd: Handoff validation failed (${validation.errors.length} errors) - saving anyway`);
    }

    // Save final handoff
    const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
    const filename = `${sessionId}_final_${timestamp}.json`;
    const handoffPath = path.join(handoffDir, filename);

    fs.writeFileSync(handoffPath, JSON.stringify(handoff, null, 2), 'utf8');

    // Update latest reference
    fs.writeFileSync(latestPath, JSON.stringify({
      latest_handoff: filename,
      created: handoff.date,
      session_id: sessionId,
      final: true
    }, null, 2), 'utf8');

    // Cleanup old handoffs
    const deletedCount = cleanupOldHandoffs(handoffDir);
    if (deletedCount > 0) {
      console.error(`   Cleanup: Deleted ${deletedCount} old handoff file(s)`);
    }

    // Write to stderr for visibility
    console.error(`\n📋 SessionEnd: Final handoff saved to ${filename}`);
    console.error(`   Reason: ${reason}`);
    if (!validation.valid) {
      console.error(`   Validation: ${validation.errors.length} issues detected`);
    }
    console.error(`   Review: .claude/handoffs/${filename}`);

    process.exit(0);
  } catch (e) {
    const cwd = process.cwd();
    const handoffDir = path.join(cwd, '.claude', 'handoffs');
    logError(handoffDir, 'SessionEnd', `Exception: ${e.message}`);
    console.error(`\n⚠️  SessionEnd hook error: ${e.message}`);
    process.exit(0);
  }
});

3. session-start-handoff.cjs

Loads previous handoff with validation and graceful degradation.

#!/usr/bin/env node
/**
 * SessionStart Hook - Load previous handoff into context
 * Provides continuity by injecting previous session state
 */

const fs = require('fs');
const path = require('path');

// Validation function
function validateHandoff(handoff) {
  const errors = [];

  // Check required fields
  if (!handoff.date) errors.push('Missing date');
  if (!handoff.session_id) errors.push('Missing session_id');
  if (!handoff.status) errors.push('Missing status');

  // Check tasks structure
  if (!handoff.tasks || typeof handoff.tasks !== 'object') {
    errors.push('Missing or invalid tasks object');
  } else {
    if (!Array.isArray(handoff.tasks.completed)) errors.push('tasks.completed must be array');
    if (!Array.isArray(handoff.tasks.in_progress)) errors.push('tasks.in_progress must be array');
    if (!Array.isArray(handoff.tasks.pending)) errors.push('tasks.pending must be array');
    if (!Array.isArray(handoff.tasks.blockers)) errors.push('tasks.blockers must be array');
  }

  // Check decisions and next_steps
  if (!Array.isArray(handoff.decisions)) errors.push('decisions must be array');
  if (!Array.isArray(handoff.next_steps)) errors.push('next_steps must be array');

  return { valid: errors.length === 0, errors };
}

// Error logging function
function logError(handoffDir, hookName, error) {
  try {
    const logPath = path.join(handoffDir, 'errors.log');
    const timestamp = new Date().toISOString();
    const logEntry = `[${timestamp}] [${hookName}] ${error}\n`;
    fs.appendFileSync(logPath, logEntry, 'utf8');
  } catch (e) {
    // Silent fail on logging error
  }
}

let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
  try {
    const hookData = JSON.parse(input);
    const cwd = hookData.cwd || process.cwd();

    // Look for latest handoff
    const handoffDir = path.join(cwd, '.claude', 'handoffs');
    const latestPath = path.join(handoffDir, 'latest.json');

    if (!fs.existsSync(latestPath)) {
      console.log(JSON.stringify({
        userPrompt: "\n\n[NO PREVIOUS SESSION - Starting fresh]"
      }));
      process.exit(0);
      return;
    }

    // Read latest handoff reference
    const latestInfo = JSON.parse(fs.readFileSync(latestPath, 'utf8'));
    if (!latestInfo.latest_handoff) {
      console.log(JSON.stringify({
        userPrompt: "\n\n[NO PREVIOUS SESSION - Starting fresh]"
      }));
      process.exit(0);
      return;
    }

    // Read handoff document
    const handoffPath = path.join(handoffDir, latestInfo.latest_handoff);
    if (!fs.existsSync(handoffPath)) {
      logError(handoffDir, 'SessionStart', `Handoff file not found: ${handoffPath}`);
      console.log(JSON.stringify({
        userPrompt: "\n\n[PREVIOUS HANDOFF NOT FOUND - Starting fresh]"
      }));
      process.exit(0);
      return;
    }

    const handoff = JSON.parse(fs.readFileSync(handoffPath, 'utf8'));

    // Validate handoff
    const validation = validateHandoff(handoff);
    if (!validation.valid) {
      logError(handoffDir, 'SessionStart', `Validation failed: ${validation.errors.join(', ')}`);
      console.error(`\n⚠️  SessionStart: Handoff validation failed (${validation.errors.length} errors) - loading anyway`);
    }

    // Format handoff for injection
    let contextInjection = `

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 PREVIOUS SESSION HANDOFF
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

**Session ID:** ${handoff.session_id}
**Date:** ${handoff.date}
**Status:** ${handoff.status}
${handoff.session_end_reason ? `**End Reason:** ${handoff.session_end_reason}` : ''}

## Tasks

### Completed
${handoff.tasks.completed.length > 0 ? handoff.tasks.completed.map(t => `- [x] ${t}`).join('\n') : '- None'}

### In Progress
${handoff.tasks.in_progress.length > 0 ? handoff.tasks.in_progress.map(t => `- [ ] ${t}`).join('\n') : '- None'}

### Pending
${handoff.tasks.pending.length > 0 ? handoff.tasks.pending.map(t => `- [ ] ${t}`).join('\n') : '- None'}

### Blockers
${handoff.tasks.blockers.length > 0 ? handoff.tasks.blockers.map(b => `- ⚠️ ${b}`).join('\n') : '- None'}

## Key Decisions
${handoff.decisions.length > 0 ? handoff.decisions.map(d => `- ${d}`).join('\n') : '- None documented'}

## Next Steps
${handoff.next_steps.length > 0 ? handoff.next_steps.map((s, i) => `${i + 1}. ${s}`).join('\n') : '- Not specified'}

${handoff.notes ? `\n## Notes\n${handoff.notes}` : ''}

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

**CONTINUITY MODE ACTIVE** - Context preserved from previous session.
To update this handoff, modify: .claude/handoffs/${latestInfo.latest_handoff}

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
`;

    // Add validation warning if needed
    if (!validation.valid) {
      contextInjection += `\n\n⚠️ **Warning:** Handoff validation detected ${validation.errors.length} issue(s):\n${validation.errors.map(e => `- ${e}`).join('\n')}\n`;
    }

    // Output to stdout
    console.log(JSON.stringify({
      userPrompt: contextInjection
    }));

    // Write to stderr for visibility
    console.error(`\n📋 SessionStart: Loaded handoff from ${latestInfo.latest_handoff}`);
    if (!validation.valid) {
      console.error(`   Validation: ${validation.errors.length} issues detected`);
    }

    process.exit(0);
  } catch (e) {
    const cwd = process.cwd();
    const handoffDir = path.join(cwd, '.claude', 'handoffs');
    logError(handoffDir, 'SessionStart', `Exception: ${e.message}`);
    console.error(`\n⚠️  SessionStart hook error: ${e.message}`);
    console.log(JSON.stringify({
      userPrompt: "\n\n[ERROR LOADING PREVIOUS SESSION - Starting fresh]"
    }));
    process.exit(0);
  }
});

Usage Notes

These context persistence hooks:

  • Work independently of protocol enforcement
  • Can be used with or without protocol-enforcer MCP server
  • Provide robust error handling and validation
  • Maintain non-blocking behavior (never prevent compaction/session operations)
  • Create structured JSON handoffs in .claude/handoffs/
  • Automatically cleanup old handoffs (keep last 10)
  • Log errors to .claude/handoffs/errors.log for debugging

Known Limitation: Hooks cannot access conversation history or internal state. Handoff files will have empty arrays for tasks/decisions/next_steps. Workaround: manually edit handoff files before session ends, or use project documentation (CLAUDE.md) for manual state tracking.
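
A manually completed handoff might look like this (illustrative values only; the field names match the schema the hooks validate):

{
  "date": "2026-01-24T05:14:00.000Z",
  "session_id": "abc123",
  "status": "in_progress",
  "tasks": {
    "completed": ["Added login form validation"],
    "in_progress": ["Wiring GraphQL mutation into form submit"],
    "pending": ["Unit tests for validation rules"],
    "blockers": []
  },
  "decisions": ["Reused existing useForm hook instead of adding new state logic"],
  "next_steps": ["Finish mutation wiring", "Run lint and type-check"]
}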

Installation:

  1. Create .cursor/hooks/ directory
  2. Save these scripts with .cjs extension
  3. Make executable: chmod +x .cursor/hooks/*.cjs
  4. Configure in .claude/settings.json (see Appendix C for the format; a sketch for these hooks follows below)
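
A minimal sketch, assuming your Claude Code version exposes the PreCompact, SessionEnd, and SessionStart hook events (names inferred from the scripts above; verify against your version's hook documentation):

{
  "hooks": {
    "PreCompact": [{
      "hooks": [{
        "type": "command",
        "command": "${workspaceFolder}/.cursor/hooks/pre-compact-handoff.cjs"
      }]
    }],
    "SessionEnd": [{
      "hooks": [{
        "type": "command",
        "command": "${workspaceFolder}/.cursor/hooks/session-end-handoff.cjs"
      }]
    }],
    "SessionStart": [{
      "hooks": [{
        "type": "command",
        "command": "${workspaceFolder}/.cursor/hooks/session-start-handoff.cjs"
      }]
    }]
  }
}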

License

MIT License - Copyright (c) 2025 Jason Lusk

Bundled Configuration Files

config.behavioral.json:

{
"enforced_rules": {
"require_protocol_steps": [
{
"name": "sequential_thinking",
"hook": "user_prompt_submit"
},
{
"name": "analyze_behavior",
"hook": "user_prompt_submit"
},
{
"name": "execute_chores_audit",
"hook": "pre_tool_use"
}
],
"require_checklist_confirmation": true,
"minimum_checklist_items": 12
},
"checklist_items": [
{
"text": "Sequential thinking completed FIRST",
"hook": "user_prompt_submit"
},
{
"text": "Pre-output verification protocol executed (draft reviewed against CHORES)",
"hook": "pre_tool_use"
},
{
"text": "C - Constraint: All constraints identified, ambiguities clarified, prerequisites verified",
"hook": "pre_tool_use"
},
{
"text": "H - Hallucination: Only observable facts stated, no 'likely/probably' language, assumptions validated",
"hook": "pre_tool_use"
},
{
"text": "O - Overreach: Only explicit request addressed, smallest safe change, no scope creep",
"hook": "pre_tool_use"
},
{
"text": "R - Reasoning: Logical steps shown, inferences validated, multiple causes considered",
"hook": "pre_tool_use"
},
{
"text": "E - Ethics/Safety: Security vulnerabilities checked, destructive operations verified",
"hook": "pre_tool_use"
},
{
"text": "S - Sycophancy: Technical accuracy prioritized, assumptions challenged, no excessive praise",
"hook": "pre_tool_use"
},
{
"text": "Meta-Behavior: Instructions executed not explained, frameworks used as checklists not descriptions",
"hook": "pre_tool_use"
},
{
"text": "Communication: No emojis, no timelines, no unnecessary comments, direct output",
"hook": "pre_tool_use"
},
{
"text": "Code Quality: No backwards-compatibility hacks, no premature abstractions, self-documenting code",
"hook": "pre_tool_use"
},
{
"text": "Active verification performed (not passive acknowledgment)",
"hook": "post_tool_use"
},
{
"text": "Professional objectivity maintained (facts over validation)",
"hook": "post_tool_use"
}
]
}
config.development.json:

{
"enforced_rules": {
"require_protocol_steps": [
{
"name": "sequential_thinking",
"hook": "user_prompt_submit"
},
{
"name": "task_initiation",
"hook": "user_prompt_submit"
},
{
"name": "pre_planning_analysis",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"name": "plan_generation",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
}
],
"require_checklist_confirmation": true,
"minimum_checklist_items": 5
},
"checklist_items": [
{
"text": "Pre-response compliance audit completed",
"hook": "user_prompt_submit"
},
{
"text": "Sequential thinking completed FIRST (mcp__clear-thought__sequentialthinking)",
"hook": "user_prompt_submit"
},
{
"text": "User request restated with ALL ambiguities clarified",
"hook": "user_prompt_submit"
},
{
"text": "Requirements gathered (tickets, designs, data sources)",
"hook": "pre_tool_use"
},
{
"text": "Existing patterns analyzed for reuse (components, hooks, utils)",
"hook": "pre_tool_use"
},
{
"text": "Dependencies identified (files to modify/create, reusable code)",
"hook": "pre_tool_use"
},
{
"text": "PLAN comprehensive (objective, files, dependencies, tools, confidence, risks)",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "PLAN confirmed by user BEFORE executing code",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "Technical precision maintained (no 'likely' explanations)",
"hook": "pre_tool_use"
},
{
"text": "Pattern matching verified (file structure, naming, code style)",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "Reuse verified (searched before creating)",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "TypeScript strict mode (no 'any' without justification)",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "Linting passed",
"hook": "post_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "Type check passed",
"hook": "post_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "No console.log() statements",
"hook": "post_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "Import order correct",
"hook": "post_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "Acceptance criteria verified",
"hook": "post_tool_use"
},
{
"text": "Completion summarized with deviations justified",
"hook": "post_tool_use"
}
]
}
config.foundation.json:

{
"$schema": "https://raw.githubusercontent.com/mpalpha/protocol-enforcer-mcp/main/schema.json",
"version": "2.0.0",
"project_name": "Your Project Name",
"description": "Foundation strategies - optimal balance of enforcement and efficiency",
"strategies_enabled": {
"session_tokens": true,
"extended_enforcement": true,
"macro_gates": true,
"fast_track_mode": true,
"unified_tokens": true
},
"security": {
"session_token_ttl_ms": 600000,
"session_token_ttl_description": "10 minutes (MCP recommended)",
"allow_extended_ttl": false,
"extended_ttl_warning": "Extended TTL (>10min) increases security risk. Only enable for trusted environments."
},
"enforced_rules": {
"require_protocol_steps": [
{
"name": "informed_reasoning_analyze",
"description": "Use informed_reasoning before ANY tool",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit", "WebSearch", "Grep", "Task"]
}
],
"require_checklist_confirmation": true,
"minimum_checklist_items": 3,
"blocking_mode": "strict"
},
"checklist_items": [
{
"text": "ANALYZE: Used informed_reasoning, clarified requirements, reviewed relevant documentation/designs (if applicable), searched codebase",
"hook": "pre_tool_use",
"category": "critical",
"gate": "ANALYZE"
},
{
"text": "PLAN: Analyzed dependencies, generated PLAN with confidence/risks, received user confirmation",
"hook": "pre_tool_use",
"category": "critical",
"gate": "PLAN"
},
{
"text": "VALIDATE: Ran build/lint/type-check, verified acceptance criteria",
"hook": "post_tool_use",
"category": "critical",
"gate": "VALIDATE"
}
],
"fast_track_mode": {
"enabled": true,
"low_complexity_criteria": {
"single_line_edits": true,
"comment_only_changes": true,
"string_literal_changes": true
},
"always_full_protocol": {
"critical_paths": [
".env",
".cursor/hooks/",
".protocol-enforcer.json"
]
}
}
}
config.minimal.json:

{
"enforced_rules": {
"require_protocol_steps": [
{
"name": "sequential_thinking",
"hook": "user_prompt_submit"
},
{
"name": "planning",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"name": "execution",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
}
],
"require_checklist_confirmation": true,
"minimum_checklist_items": 3
},
"checklist_items": [
{
"text": "Sequential thinking completed FIRST",
"hook": "user_prompt_submit"
},
{
"text": "Task requirements clarified with user",
"hook": "pre_tool_use"
},
{
"text": "Plan created and confirmed before execution",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "Technical precision maintained (state 'I don't know' when uncertain)",
"hook": "pre_tool_use"
},
{
"text": "Zero deflection policy (attempt available tools before claiming unavailable)",
"hook": "pre_tool_use"
},
{
"text": "Completion verified against stated objectives",
"hook": "post_tool_use",
"applies_to": ["Write", "Edit"]
}
]
}
{
"$schema": "https://raw.githubusercontent.com/mpalpha/protocol-enforcer-mcp/main/schema.json",
"version": "1.0.0",
"project_name": "SmartPass Admin UI",
"description": "Protocol enforcement configuration for SmartPass Admin UI development workflow",
"enforced_rules": {
"require_protocol_steps": [
{
"name": "informed_reasoning_required",
"description": "Use mcp__memory-augmented-reasoning__informed_reasoning (analyze phase) before ANY action - enforced at UserPromptSubmit level",
"hook": "user_prompt_submit",
"applies_to": []
},
{
"name": "informed_reasoning_analyze",
"description": "Use mcp__memory-augmented-reasoning__informed_reasoning (analyze phase) before ANY action to get context-aware guidance",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"name": "clarify_requirements",
"description": "Restate user request and identify all ambiguities",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"name": "gather_requirements",
"description": "Review Jira ticket, Figma designs, and existing codebase",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"name": "analyze_dependencies",
"description": "Identify files, components, hooks, utilities, and GraphQL operations",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"name": "generate_plan",
"description": "Create comprehensive PLAN with confidence level and risks",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"name": "await_confirmation",
"description": "Receive explicit PLAN confirmation from user before execution",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
}
],
"require_checklist_confirmation": true,
"minimum_checklist_items": 8,
"blocking_mode": "strict"
},
"checklist_items": [
{
"text": "Used mcp__memory-augmented-reasoning__informed_reasoning (analyze phase) tool before proceeding",
"hook": "pre_tool_use",
"category": "critical"
},
{
"text": "Identified and clarified all ambiguities in the request",
"hook": "pre_tool_use",
"category": "planning"
},
{
"text": "Reviewed Jira ticket for acceptance criteria and technical notes (if applicable)",
"hook": "pre_tool_use",
"category": "requirements"
},
{
"text": "Checked Figma designs for UI specifications and design tokens (if UI task)",
"hook": "pre_tool_use",
"category": "requirements"
},
{
"text": "Searched src/components/ for reusable components",
"hook": "pre_tool_use",
"category": "codebase_analysis"
},
{
"text": "Searched src/hooks/ for reusable custom hooks",
"hook": "pre_tool_use",
"category": "codebase_analysis"
},
{
"text": "Searched src/utils/ for reusable utilities",
"hook": "pre_tool_use",
"category": "codebase_analysis"
},
{
"text": "Analyzed existing GraphQL operations in src/graphql/operations/",
"hook": "pre_tool_use",
"category": "codebase_analysis"
},
{
"text": "Verified no new dependencies needed (or requested authorization)",
"hook": "pre_tool_use",
"category": "dependencies"
},
{
"text": "Generated comprehensive PLAN with task objective, files, tools, confidence level, risks, and scope statement",
"hook": "pre_tool_use",
"category": "planning"
},
{
"text": "Received explicit PLAN confirmation from user",
"hook": "pre_tool_use",
"category": "critical"
},
{
"text": "Verified scope minimized to explicit request only (no feature creep)",
"hook": "pre_tool_use",
"category": "scope"
},
{
"text": "Matched existing file structure and naming conventions",
"hook": "pre_tool_use",
"category": "patterns"
},
{
"text": "Followed existing code patterns from similar components",
"hook": "pre_tool_use",
"category": "patterns"
},
{
"text": "Ran yarn graphql:codegen after GraphQL operation changes",
"hook": "post_tool_use",
"category": "verification"
},
{
"text": "Ran yarn lint:all and fixed all ESLint errors",
"hook": "post_tool_use",
"category": "verification"
},
{
"text": "Ran yarn type-check and fixed all TypeScript errors",
"hook": "post_tool_use",
"category": "verification"
},
{
"text": "Removed all console.log() statements",
"hook": "post_tool_use",
"category": "code_quality"
},
{
"text": "Removed unnecessary code comments (self-documenting code preferred)",
"hook": "post_tool_use",
"category": "code_quality"
},
{
"text": "Verified against acceptance criteria",
"hook": "post_tool_use",
"category": "validation"
},
{
"text": "Checked development-checklists.md for pattern compliance",
"hook": "post_tool_use",
"category": "validation"
}
],
"behavioral_requirements": {
"MODEL_framework": {
"description": "Behavioral corrections framework from backup rules",
"categories": [
{
"name": "constraint_adherence",
"violations": [
"Adding features beyond explicit request",
"Refactoring code not requested",
"Optimizing prematurely",
"Adding error handling for impossible scenarios",
"Creating abstractions for one-time operations"
]
},
{
"name": "hallucination_prevention",
"violations": [
"Assuming code exists without reading files",
"Guessing file paths or API endpoints",
"Making up TypeScript types",
"Fabricating design token values",
"Inventing GraphQL operations without checking"
]
},
{
"name": "overconfidence_reduction",
"violations": [
"Proceeding without user confirmation on PLAN",
"Not stating 'I don't know' when uncertain",
"Providing 'likely' explanations without verification",
"Not requesting clarification on ambiguities"
]
},
{
"name": "reasoning_transparency",
"violations": [
"Not explaining confidence level in PLAN",
"Not documenting risks identified",
"Not stating scope boundaries explicitly",
"Not justifying deviations from PLAN"
]
},
{
"name": "ethical_alignment",
"violations": [
"Hardcoding credentials in configuration",
"Committing sensitive tokens to version control",
"Not warning about security risks"
]
}
]
}
},
"custom_rules": {
"scss_styling": {
"pre_tool_use": [
"CSS module class names use camelCase (e.g., .passTypeCard)",
"All spacing uses rem units on 2px/4px grid",
"Font styling uses $font-* variables (not font-weight)",
"Use @use instead of deprecated @import",
"Use react-md CSS variables for react-md components"
],
"post_tool_use": [
"Run yarn stylelint:fix",
"No unnecessary CSS comments",
"Border-radius uses .5rem (8px) not 7px"
]
},
"typescript_react": {
"pre_tool_use": [
"Component props defined inline (not separate .types.ts)",
"No React.FC or FC type annotations",
"Import order: React, third-party, components, hooks, utils, GraphQL, types, styles (last)",
"Use type keyword for all type-only imports"
],
"post_tool_use": [
"Run yarn eslint:fix",
"Run yarn type-check",
"Remove unused imports",
"Props alphabetized (callbacks after regular props)"
]
},
"graphql": {
"pre_tool_use": [
"NO exports in src/graphql/operations/ files",
"Use gql(...) without export const"
],
"post_tool_use": [
"Run yarn graphql:codegen after operation changes",
"Import from src/graphql/generated/graphql only",
"Alias all useQuery/useMutation return values"
]
}
},
"tool_permissions": {
"auto_approved_tools": [
"yarn graphql:codegen",
"yarn lint:all",
"yarn eslint:fix",
"yarn stylelint:fix",
"yarn type-check",
"Bash",
"Read",
"Glob",
"Grep"
],
"require_user_approval": [
"yarn add",
"yarn remove",
"git push",
"rm -rf",
"Write",
"Edit"
]
},
"documentation": {
"references": [
".cursor/development-checklists.md",
".cursor/rules/mandatory-supervisor-protocol.mdc",
".cursor/rules/styling.mdc",
".cursor/rules/typescript.mdc",
".cursor/rules/graphql.mdc",
"CLAUDE.md"
],
"mcp_servers_required": [
"mcp__memory-augmented-reasoning__informed_reasoning",
"mcp__memory-bank__*",
"mcp__mcp-atlassian__jira_get_issue",
"mcp__figma__get_design_context"
]
}
}
#!/usr/bin/env node
/**
* Protocol Enforcer MCP Server
* Enforces custom workflow protocol compliance before allowing file operations
*
* Author: Jason Lusk <jason@jasonlusk.com>
* License: MIT
*/
const fs = require('fs');
const path = require('path');
const os = require('os');
const readline = require('readline');
const { v4: uuidv4 } = require('uuid');
// State tracking
const state = {
configPath: null,
config: null,
complianceChecks: [],
operationTokens: new Map(), // Map<token, {expires: timestamp, used: boolean}>
tokenTimeout: 60000, // 60 seconds
sessionTokens: new Map(), // Map<sessionId, {expires: timestamp, created: timestamp}>
sessionTokenTimeout: 3600000, // 60 minutes
toolLogPath: path.join(os.homedir(), '.protocol-tool-log.json')
};
// BUG #20 FIX: Background cleanup of expired session tokens (every 5 minutes)
setInterval(() => {
const now = Date.now();
for (const [sessionId, tokenData] of state.sessionTokens.entries()) {
if (tokenData.expires < now) {
state.sessionTokens.delete(sessionId);
// Clean up file system token
const tokenFile = path.join(os.homedir(), `.protocol-session-${sessionId}.json`);
try {
fs.unlinkSync(tokenFile);
} catch (err) {
// Ignore errors - file may already have been removed
}
}
}
}, 300000); // 5 minutes
// Tool execution tracking functions
function readToolLog() {
try {
if (fs.existsSync(state.toolLogPath)) {
const data = fs.readFileSync(state.toolLogPath, 'utf8');
return JSON.parse(data);
}
} catch (err) {
console.error('[protocol-enforcer] Error reading tool log:', err.message);
}
return [];
}
function writeToolLog(log) {
try {
fs.writeFileSync(state.toolLogPath, JSON.stringify(log, null, 2), 'utf8');
} catch (err) {
console.error('[protocol-enforcer] Error writing tool log:', err.message);
}
}
function recordToolExecution(toolName, parameters = {}) {
const log = readToolLog();
log.push({
sessionId: process.env.CLAUDE_SESSION_ID || 'unknown',
toolName,
parameters,
timestamp: new Date().toISOString()
});
writeToolLog(log);
}
function getSessionToolCalls(sessionId) {
const log = readToolLog();
return log.filter(entry => entry.sessionId === sessionId);
}
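// Example log entry appended by recordToolExecution (illustrative values):
//   { "sessionId": "abc123", "toolName": "Grep",
//     "parameters": { "pattern": "useQuery" }, "timestamp": "2026-01-24T05:14:00.000Z" }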
// Default configuration (MODEL Framework CHORES behavioral fixes)
const DEFAULT_CONFIG = {
enforced_rules: {
require_protocol_steps: [
{
name: "analyze_behavior",
hook: "user_prompt_submit"
},
{
name: "apply_chores_fixes",
hook: "pre_tool_use",
applies_to: ["Write", "Edit"]
}
],
require_checklist_confirmation: true,
minimum_checklist_items: 3
},
checklist_items: [
{
text: "Constraint issues addressed (structure/format adherence)",
hook: "pre_tool_use"
},
{
text: "Hallucination issues addressed (no false information)",
hook: "pre_tool_use"
},
{
text: "Overconfidence issues addressed (uncertainty expressed when appropriate)",
hook: "pre_tool_use"
},
{
text: "Reasoning issues addressed (logical consistency verified)",
hook: "pre_tool_use"
},
{
text: "Ethical/Safety issues addressed (harmful content prevented)",
hook: "pre_tool_use"
},
{
text: "Sycophancy issues addressed (truthfulness over false agreement)",
hook: "pre_tool_use"
}
]
};
// Find config file (project scope takes precedence)
function findConfigFile() {
const cwd = process.cwd();
const projectConfig = path.join(cwd, '.protocol-enforcer.json');
const homeConfig = path.join(os.homedir(), '.protocol-enforcer.json');
if (fs.existsSync(projectConfig)) {
return projectConfig;
}
if (fs.existsSync(homeConfig)) {
return homeConfig;
}
return null;
}
// Load configuration
function loadConfig() {
const configPath = findConfigFile();
if (configPath) {
state.configPath = configPath;
const rawConfig = JSON.parse(fs.readFileSync(configPath, 'utf8'));
state.config = validateConfig(rawConfig);
return state.config;
}
state.config = DEFAULT_CONFIG;
return null;
}
// Tool: initialize_protocol_config
async function initializeProtocolConfig(args) {
const scope = args.scope || 'project';
let configPath;
if (scope === 'project') {
configPath = path.join(process.cwd(), '.protocol-enforcer.json');
} else if (scope === 'user') {
configPath = path.join(os.homedir(), '.protocol-enforcer.json');
} else {
return {
content: [{
type: 'text',
text: JSON.stringify({ error: 'Invalid scope. Must be "project" or "user".' }, null, 2)
}]
};
}
if (fs.existsSync(configPath)) {
return {
content: [{
type: 'text',
text: JSON.stringify({
error: 'Configuration file already exists',
path: configPath
}, null, 2)
}]
};
}
fs.writeFileSync(configPath, JSON.stringify(DEFAULT_CONFIG, null, 2), 'utf8');
state.configPath = configPath;
state.config = DEFAULT_CONFIG;
return {
content: [{
type: 'text',
text: JSON.stringify({
success: true,
message: `Configuration file created at ${configPath}`,
config: DEFAULT_CONFIG
}, null, 2)
}]
};
}
// Validate config format
function validateConfig(config) {
const errors = [];
// Validate protocol steps
if (config.enforced_rules.require_protocol_steps) {
config.enforced_rules.require_protocol_steps.forEach((step, idx) => {
if (typeof step === 'string') {
errors.push(`Protocol step at index ${idx} is a string. Must be an object with 'name' and 'hook' properties.`);
} else if (!step.name || !step.hook) {
errors.push(`Protocol step at index ${idx} missing required 'name' or 'hook' property.`);
}
});
}
// Validate checklist items
if (config.checklist_items) {
config.checklist_items.forEach((item, idx) => {
if (typeof item === 'string') {
errors.push(`Checklist item at index ${idx} is a string. Must be an object with 'text' and 'hook' properties.`);
} else if (!item.text || !item.hook) {
errors.push(`Checklist item at index ${idx} missing required 'text' or 'hook' property.`);
}
});
}
if (errors.length > 0) {
throw new Error(`Invalid configuration format:\n${errors.join('\n')}\n\nSee README.md Configuration Reference section for correct format.`);
}
return config;
}
// Filter rules by hook and tool name
function filterByHook(items, hook, toolName = null) {
return items.filter(item => {
// Check if hook matches
if (item.hook !== hook) return false;
// Check if tool-specific filtering applies
if (item.applies_to && toolName) {
return item.applies_to.includes(toolName);
}
return true;
});
}
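// e.g. filterByHook(items, 'pre_tool_use', 'Edit') keeps items whose hook is
// 'pre_tool_use' and whose applies_to list (when present) includes 'Edit';
// items without applies_to match every tool for their hook.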
// ┌─────────────────────────────────────────────────────────────────────────┐
// │ PHASE 3: Fast-Track Mode - Classify task complexity │
// └─────────────────────────────────────────────────────────────────────────┘
function classifyTaskComplexity(config, toolName, args = {}) {
// Fast-track disabled - always use full protocol
if (!config.fast_track_mode || !config.fast_track_mode.enabled) {
return { fastTrackEligible: false, reason: 'Fast-track mode disabled' };
}
const criteria = config.fast_track_mode.low_complexity_criteria || {};
const criticalPaths = config.fast_track_mode.always_full_protocol?.critical_paths || [];
// Check if file is in critical paths (always requires full protocol)
const filePath = args.file_path || args.path || '';
if (filePath && criticalPaths.some(criticalPath => filePath.includes(criticalPath))) {
return { fastTrackEligible: false, reason: `Critical path: ${filePath}` };
}
// Analyze complexity based on criteria
let isLowComplexity = false;
let matchedCriteria = null;
// Single-line edits
if (criteria.single_line_edits && args.old_string && args.new_string) {
const oldLines = args.old_string.split('\n').length;
const newLines = args.new_string.split('\n').length;
if (oldLines === 1 && newLines === 1) {
isLowComplexity = true;
matchedCriteria = 'single_line_edits';
}
}
// Comment-only changes
if (criteria.comment_only_changes && args.old_string && args.new_string) {
const commentPattern = /^\s*(\/\/|\/\*|\*|#)/;
const oldIsComment = args.old_string.split('\n').every(line => !line.trim() || commentPattern.test(line));
const newIsComment = args.new_string.split('\n').every(line => !line.trim() || commentPattern.test(line));
// Both sides must be comment-only; otherwise code is being added or removed
if (oldIsComment && newIsComment) {
isLowComplexity = true;
matchedCriteria = 'comment_only_changes';
}
}
// String literal changes
if (criteria.string_literal_changes && args.old_string && args.new_string) {
const stringPattern = /^[\s'"]+|[\s'"]+$/g;
const oldCore = args.old_string.replace(stringPattern, '');
const newCore = args.new_string.replace(stringPattern, '');
// If structure is identical except for string content
if (oldCore.length > 0 && newCore.length > 0 &&
args.old_string.split('"').length === args.new_string.split('"').length) {
isLowComplexity = true;
matchedCriteria = 'string_literal_changes';
}
}
return {
fastTrackEligible: isLowComplexity,
reason: isLowComplexity ? `Low complexity: ${matchedCriteria}` : 'Standard complexity - requires full protocol'
};
}
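// Illustrative classifications (assuming fast-track is enabled and '.env'
// is listed under always_full_protocol.critical_paths):
//   classifyTaskComplexity(config, 'Edit', {
//     file_path: 'src/utils/format.ts',
//     old_string: 'const retries = 3;',
//     new_string: 'const retries = 5;'
//   }) // -> { fastTrackEligible: true, ... } (single-line edit)
//   classifyTaskComplexity(config, 'Edit', {
//     file_path: '.env', old_string: 'A=1', new_string: 'A=2'
//   }) // -> { fastTrackEligible: false, reason: 'Critical path: .env' }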
// Tool: verify_protocol_compliance
async function verifyProtocolCompliance(args) {
const rawConfig = state.config || loadConfig() || DEFAULT_CONFIG;
// Validate config format (throws if invalid)
const config = validateConfig(rawConfig);
const violations = [];
const hook = args.hook;
const toolName = args.tool_name || null;
if (!hook) {
return {
content: [{
type: 'text',
text: JSON.stringify({
error: 'Missing required parameter: hook. Must specify which hook is calling (e.g., "user_prompt_submit", "pre_tool_use", "post_tool_use").'
}, null, 2)
}]
};
}
// ┌─────────────────────────────────────────────────────────────────────────┐
// │ PHASE 3: Fast-Track Mode - Check if task qualifies for fast-track │
// └─────────────────────────────────────────────────────────────────────────┘
const fastTrackResult = classifyTaskComplexity(config, toolName, args);
const isFastTrack = fastTrackResult.fastTrackEligible;
// Check required protocol steps (filtered by hook)
if (config.enforced_rules.require_protocol_steps && Array.isArray(config.enforced_rules.require_protocol_steps)) {
const allRequiredSteps = config.enforced_rules.require_protocol_steps;
const hookFilteredSteps = filterByHook(allRequiredSteps, hook, toolName);
const completedSteps = args.protocol_steps_completed || [];
const missingSteps = hookFilteredSteps.filter(step => !completedSteps.includes(step.name));
if (missingSteps.length > 0) {
missingSteps.forEach(step => {
violations.push(`VIOLATION: Required protocol step not completed: ${step.name} (hook: ${hook})`);
});
}
}
// Check checklist confirmation (filtered by hook)
if (config.enforced_rules.require_checklist_confirmation) {
const checkedItems = args.checklist_items_checked || [];
const allRequiredItems = config.checklist_items || [];
const hookFilteredItems = filterByHook(allRequiredItems, hook, toolName);
const minItems = config.enforced_rules.minimum_checklist_items || 0;
// Count only items applicable to this hook
// Fast-track: Reduce minimum by 1 if PLAN gate is skipped
let applicableMinItems = Math.min(minItems, hookFilteredItems.length);
if (isFastTrack) {
const planGateItems = hookFilteredItems.filter(item => item.gate === 'PLAN');
applicableMinItems = Math.max(0, applicableMinItems - planGateItems.length);
}
if (checkedItems.length < applicableMinItems) {
violations.push(`VIOLATION: Only ${checkedItems.length} checklist items checked, minimum ${applicableMinItems} required for hook '${hook}'`);
}
// Filter out PLAN gate items if fast-track eligible
let uncheckedRequired = hookFilteredItems.filter(item => !checkedItems.includes(item.text));
if (isFastTrack) {
uncheckedRequired = uncheckedRequired.filter(item => item.gate !== 'PLAN');
}
if (uncheckedRequired.length > 0) {
violations.push(`VIOLATION: Required checklist items not confirmed for hook '${hook}': ${uncheckedRequired.map(i => i.text).join(', ')}`);
}
// ┌─────────────────────────────────────────────────────────────────────────┐
// │ PHASE 2: Macro-Gates - Gate-level validation │
// └─────────────────────────────────────────────────────────────────────────┘
// Group checklist items by gate (if gate field exists)
const gateGroups = {};
hookFilteredItems.forEach(item => {
if (item.gate) {
if (!gateGroups[item.gate]) {
gateGroups[item.gate] = { items: [], checked: [] };
}
gateGroups[item.gate].items.push(item.text);
if (checkedItems.includes(item.text)) {
gateGroups[item.gate].checked.push(item.text);
}
}
});
// Validate gate completion (all items in a gate must be checked)
// Fast-track mode: skip PLAN gate validation for low-complexity tasks
Object.keys(gateGroups).forEach(gateName => {
// Skip PLAN gate if fast-track eligible
if (isFastTrack && gateName === 'PLAN') {
return;
}
const gate = gateGroups[gateName];
if (gate.checked.length < gate.items.length) {
const uncheckedInGate = gate.items.filter(item => !gate.checked.includes(item));
violations.push(`VIOLATION: Gate '${gateName}' incomplete - missing: ${uncheckedInGate.join(', ')}`);
}
});
// ┌─────────────────────────────────────────────────────────────────────────┐
// │ PHASE 4: Cross-reference checklist claims against actual tool calls │
// └─────────────────────────────────────────────────────────────────────────┘
// Get session ID from most recent tool log entry (all tools in current session share same ID)
const toolLog = readToolLog();
const sessionId = toolLog.length > 0 ? toolLog[toolLog.length - 1].sessionId : 'unknown';
const sessionTools = getSessionToolCalls(sessionId);
// Helper: Check if any tool matching pattern was called
const hasToolCall = (pattern) => {
return sessionTools.some(call => {
const toolName = call.toolName || '';
return toolName.toLowerCase().includes(pattern.toLowerCase());
});
};
/**
* Determines if a checklist item is conditional (only applies in certain scenarios)
*
* Conditional items include qualifiers like:
* - "(if applicable)" - Only required when relevant to the task
* - "(if UI task)" - Only required for UI/design work
* - "(if [condition])" - General conditional pattern
* - "(when [condition])" - Alternative conditional pattern
* - "(optional)" - Explicitly optional
*
* @param {string} itemText - The checklist item text
* @returns {boolean} - True if item has conditional qualifier
*/
const isConditionalItem = (itemText) => {
const text = itemText.toLowerCase();
return (
text.includes('(if applicable)') ||
text.includes('(if ui task)') ||
text.includes('(if ') ||
text.includes('(when ') ||
text.includes('(optional)')
);
};
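// Illustrative behavior:
//   isConditionalItem('Checked Figma designs for UI specifications (if UI task)') // -> true
//   isConditionalItem('Received explicit PLAN confirmation from user')            // -> false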
// Verify search claims
const searchClaims = checkedItems.filter(item =>
item.toLowerCase().includes('search') ||
(item.toLowerCase().includes('check') && item.toLowerCase().includes('src/'))
);
if (searchClaims.length > 0) {
const hasSearch = hasToolCall('glob') || hasToolCall('grep');
if (!hasSearch) {
violations.push(
`VIOLATION: Claimed searches (${searchClaims.length} items) but no Glob/Grep tool calls detected in session. ` +
`Checklist requires actual tool execution, not assumptions.`
);
}
}
// Verify Jira review claims
const jiraClaims = checkedItems.filter(item => {
// Skip enforcement for conditional items (e.g., "if applicable")
if (isConditionalItem(item)) {
return false;
}
const text = item.toLowerCase();
return text.includes('jira') || text.includes('ticket') || text.includes('acceptance criteria');
});
if (jiraClaims.length > 0) {
const hasJira = hasToolCall('jira');
if (!hasJira) {
violations.push(
`VIOLATION: Claimed Jira review but no jira tool calls detected in session. ` +
`Must actually fetch ticket details, not assume requirements.`
);
}
}
// Verify Figma design claims
const figmaClaims = checkedItems.filter(item => {
// Skip enforcement for conditional items (e.g., "if UI task")
if (isConditionalItem(item)) {
return false;
}
const text = item.toLowerCase();
return text.includes('figma') || (text.includes('design') && text.includes('ui'));
});
if (figmaClaims.length > 0) {
const hasFigma = hasToolCall('figma');
if (!hasFigma) {
violations.push(
`VIOLATION: Claimed Figma design review but no figma tool calls detected in session. ` +
`Must actually fetch designs, not assume specifications.`
);
}
}
}
// Record check
state.complianceChecks.push({
timestamp: new Date().toISOString(),
passed: violations.length === 0,
violations: violations,
args: args
});
if (violations.length > 0) {
return {
content: [{
type: 'text',
text: JSON.stringify({
compliant: false,
violations: violations,
message: 'Protocol compliance check FAILED. Fix violations before proceeding.'
}, null, 2)
}]
};
}
// Generate single-use operation token
const crypto = require('crypto');
const token = crypto.randomBytes(32).toString('hex');
const expires = Date.now() + state.tokenTimeout;
state.operationTokens.set(token, { expires, used: false });
// Clean up expired tokens
for (const [key, value] of state.operationTokens.entries()) {
if (value.expires < Date.now() || value.used) {
state.operationTokens.delete(key);
}
}
return {
content: [{
type: 'text',
text: JSON.stringify({
compliant: true,
operation_token: token,
token_expires_in_seconds: state.tokenTimeout / 1000,
message: 'Protocol compliance verified. Use the operation_token with authorize_file_operation before proceeding.'
}, null, 2)
}]
};
}
// Tool: get_compliance_status
async function getComplianceStatus() {
const recentChecks = state.complianceChecks.slice(-10);
const passedCount = recentChecks.filter(c => c.passed).length;
const failedCount = recentChecks.length - passedCount;
return {
content: [{
type: 'text',
text: JSON.stringify({
total_checks: state.complianceChecks.length,
recent_checks: recentChecks.length,
passed: passedCount,
failed: failedCount,
recent_violations: recentChecks
.filter(c => !c.passed)
.map(c => ({ timestamp: c.timestamp, violations: c.violations }))
}, null, 2)
}]
};
}
// Tool: get_protocol_config
async function getProtocolConfig() {
const config = state.config || loadConfig() || DEFAULT_CONFIG;
return {
content: [{
type: 'text',
text: JSON.stringify({
config_path: state.configPath || 'Using default configuration',
config: config
}, null, 2)
}]
};
}
// ┌─────────────────────────────────────────────────────────────────────────┐
// │ PHASE 4: Unified Token Model - Helper functions │
// └─────────────────────────────────────────────────────────────────────────┘
function isUnifiedTokensEnabled(config) {
return config.strategies_enabled && config.strategies_enabled.unified_tokens === true;
}
function createUnifiedToken(sessionId, operationToken, ttlMs) {
const now = Date.now();
const tokenId = uuidv4();
return {
id: tokenId,
session_id: sessionId,
operation_token: operationToken,
created_at: now,
expires_at: now + ttlMs,
version: '2.0'
};
}
function writeUnifiedToken(sessionId, tokenData) {
const tokenFile = path.join(os.homedir(), `.protocol-unified-${sessionId}.json`);
const tempFile = `${tokenFile}.tmp`;
try {
fs.writeFileSync(tempFile, JSON.stringify(tokenData, null, 2), 'utf8');
fs.renameSync(tempFile, tokenFile);
return { success: true, filePath: tokenFile };
} catch (err) {
// Cleanup temp file on error
try {
fs.unlinkSync(tempFile);
} catch (cleanupErr) {
// Ignore cleanup errors
}
return { success: false, error: err.message };
}
}
function readUnifiedToken(sessionId) {
const unifiedFile = path.join(os.homedir(), `.protocol-unified-${sessionId}.json`);
const legacySessionFile = path.join(os.homedir(), `.protocol-session-${sessionId}.json`);
const legacyOperationFile = path.join(os.homedir(), '.protocol-enforcer-token');
// Try unified token first
if (fs.existsSync(unifiedFile)) {
try {
const content = fs.readFileSync(unifiedFile, 'utf8');
const tokenData = JSON.parse(content);
// Validate structure
if (tokenData.session_id === sessionId && tokenData.expires_at > Date.now()) {
return { found: true, type: 'unified', data: tokenData };
}
} catch (err) {
// Invalid unified token, fall through to legacy
}
}
// Fallback to legacy tokens
let sessionToken = null;
let operationToken = null;
if (fs.existsSync(legacySessionFile)) {
try {
const content = fs.readFileSync(legacySessionFile, 'utf8');
sessionToken = JSON.parse(content);
} catch (err) {
// Invalid session token
}
}
if (fs.existsSync(legacyOperationFile)) {
try {
operationToken = fs.readFileSync(legacyOperationFile, 'utf8').trim();
} catch (err) {
// Invalid operation token
}
}
if (sessionToken || operationToken) {
return {
found: true,
type: 'legacy',
data: { session: sessionToken, operation: operationToken }
};
}
return { found: false };
}
// Tool: authorize_file_operation
async function authorizeFileOperation(args) {
const token = args.operation_token;
const sessionId = args.session_id;
if (!token) {
return {
content: [{
type: 'text',
text: JSON.stringify({
authorized: false,
error: 'No operation token provided. You must call verify_protocol_compliance first to obtain a token.'
}, null, 2)
}]
};
}
const tokenData = state.operationTokens.get(token);
if (!tokenData) {
return {
content: [{
type: 'text',
text: JSON.stringify({
authorized: false,
error: 'Invalid or expired operation token. Call verify_protocol_compliance again to obtain a new token.'
}, null, 2)
}]
};
}
if (tokenData.used) {
return {
content: [{
type: 'text',
text: JSON.stringify({
authorized: false,
error: 'Operation token already used. Each token is single-use only. Call verify_protocol_compliance again.'
}, null, 2)
}]
};
}
if (tokenData.expires < Date.now()) {
state.operationTokens.delete(token);
return {
content: [{
type: 'text',
text: JSON.stringify({
authorized: false,
error: 'Operation token expired. Call verify_protocol_compliance again to obtain a new token.'
}, null, 2)
}]
};
}
// Mark token as used
tokenData.used = true;
// Write single-use token file for PreToolUse hook verification (first tool)
const tokenFile = path.join(os.homedir(), '.protocol-enforcer-token');
try {
fs.writeFileSync(tokenFile, token, 'utf8');
} catch (err) {
return {
content: [{
type: 'text',
text: JSON.stringify({
authorized: false,
error: `Failed to write token file: ${err.message}`
}, null, 2)
}]
};
}
// Handle session token creation if session_id provided
if (sessionId && typeof sessionId === 'string' && sessionId.trim() !== '') {
const now = Date.now();
const expires = now + state.sessionTokenTimeout;
const config = state.config || loadConfig() || DEFAULT_CONFIG;
// ┌─────────────────────────────────────────────────────────────────────────┐
// │ PHASE 4: Check if unified tokens are enabled │
// └─────────────────────────────────────────────────────────────────────────┘
if (isUnifiedTokensEnabled(config)) {
// Create unified token (combines operation + session into one file)
const unifiedTokenData = createUnifiedToken(sessionId, token, state.sessionTokenTimeout);
const writeResult = writeUnifiedToken(sessionId, unifiedTokenData);
if (writeResult.success) {
// Store in memory
state.sessionTokens.set(sessionId, {
expires: unifiedTokenData.expires_at,
created: unifiedTokenData.created_at,
unified: true
});
return {
content: [{
type: 'text',
text: JSON.stringify({
authorized: true,
session_token_created: true,
token_type: 'unified',
session_id: sessionId,
session_expires_in_seconds: state.sessionTokenTimeout / 1000,
message: 'File operation authorized. Unified session token created for 60-minute workflow.'
}, null, 2)
}]
};
} else {
// Unified token creation failed, fall back to legacy
// Continue with legacy token creation below
}
}
// Legacy token creation (used when unified tokens disabled OR unified write failed)
const sessionTokenFile = path.join(os.homedir(), `.protocol-session-${sessionId}.json`);
// Check if session token already exists in memory
let existingToken = state.sessionTokens.get(sessionId);
if (existingToken && existingToken.expires > now) {
// BUG #21 FIX: Verify file exists, recreate if missing (memory-file desync)
if (!fs.existsSync(sessionTokenFile)) {
const tokenData = {
session_id: sessionId,
created_at: existingToken.created,
expires_at: existingToken.expires
};
// BUG #1 + #2 FIX: Atomic write (temp + rename)
const tempFile = `${sessionTokenFile}.tmp`;
try {
fs.writeFileSync(tempFile, JSON.stringify(tokenData), 'utf8');
fs.renameSync(tempFile, sessionTokenFile);
} catch (err) {
// Cleanup temp file on error
try {
fs.unlinkSync(tempFile);
} catch (cleanupErr) {
// Ignore cleanup errors
}
// Don't fail - memory token still valid, file will be recreated next time
}
}
// Session token already valid
return {
content: [{
type: 'text',
text: JSON.stringify({
authorized: true,
session_token_created: false,
session_id: sessionId,
session_expires_in_seconds: Math.round((existingToken.expires - now) / 1000),
message: 'File operation authorized. Existing session token still valid.'
}, null, 2)
}]
};
}
// Create new session token
const tokenData = {
session_id: sessionId,
created_at: now,
expires_at: expires
};
// BUG #1 + #2 FIX: Atomic write (temp + rename)
const tempFile = `${sessionTokenFile}.tmp`;
try {
fs.writeFileSync(tempFile, JSON.stringify(tokenData), 'utf8');
fs.renameSync(tempFile, sessionTokenFile);
// Store in memory
state.sessionTokens.set(sessionId, {
expires: expires,
created: now
});
return {
content: [{
type: 'text',
text: JSON.stringify({
authorized: true,
session_token_created: true,
session_id: sessionId,
session_expires_in_seconds: state.sessionTokenTimeout / 1000,
message: 'File operation authorized. Session token created for 60-minute workflow.'
}, null, 2)
}]
};
} catch (err) {
// Cleanup temp file on error
try {
fs.unlinkSync(tempFile);
} catch (cleanupErr) {
// Ignore cleanup errors
}
// Session token creation failed, but single-use token file already written
// Return success but warn about session token failure
return {
content: [{
type: 'text',
text: JSON.stringify({
authorized: true,
session_token_created: false,
session_id: sessionId,
warning: `Failed to create session token: ${err.message}. Single-use token still valid.`,
message: 'File operation authorized. Token file written for hook verification. You may now proceed with Write/Edit operations.'
}, null, 2)
}]
};
}
}
// No session_id provided - single-use token only
return {
content: [{
type: 'text',
text: JSON.stringify({
authorized: true,
session_token_created: false,
message: 'File operation authorized. Token file written for hook verification. You may now proceed with Write/Edit operations.'
}, null, 2)
}]
};
}
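// Typical handshake (illustrative):
//   1. verify_protocol_compliance({ hook: 'pre_tool_use', tool_name: 'Write', ... })
//        -> { compliant: true, operation_token: '<64-char hex>' }
//   2. authorize_file_operation({ operation_token: '<64-char hex>', session_id: '<optional>' })
//        -> { authorized: true }; ~/.protocol-enforcer-token is written for hook verification
//   3. The Write/Edit proceeds; when a session_id was supplied, subsequent tools
//      reuse the session token until it expires.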
// MCP Protocol Handler
const tools = [
{
name: 'initialize_protocol_config',
description: 'Create a new protocol enforcer configuration file at project or user scope',
inputSchema: {
type: 'object',
properties: {
scope: {
type: 'string',
enum: ['project', 'user'],
description: 'Where to create the config file: "project" (.protocol-enforcer.json in current directory) or "user" (~/.protocol-enforcer.json)'
}
},
required: ['scope']
}
},
{
name: 'verify_protocol_compliance',
description: 'Verify that mandatory protocol steps have been completed before allowing file operations. This is a generic tool - protocol steps and checklist items are defined in your .protocol-enforcer.json configuration file. Supports hook-specific filtering.',
inputSchema: {
type: 'object',
properties: {
hook: {
type: 'string',
enum: ['user_prompt_submit', 'session_start', 'pre_tool_use', 'post_tool_use', 'stop'],
description: 'REQUIRED: Which hook is calling this verification (e.g., "pre_tool_use", "user_prompt_submit"). Filters rules to only those applicable to this hook.'
},
tool_name: {
type: 'string',
description: 'Optional: name of the tool being called (e.g., "Write", "Edit"). Used for tool-specific filtering when combined with hook. Only applies when hook is "pre_tool_use" or "post_tool_use".'
},
protocol_steps_completed: {
type: 'array',
items: { type: 'string' },
description: 'List of protocol step names that have been completed (e.g., ["planning", "analysis"]). Step names must match those defined in your .protocol-enforcer.json config.'
},
checklist_items_checked: {
type: 'array',
items: { type: 'string' },
description: 'List of checklist items that were verified. Items should match those defined in your .protocol-enforcer.json config.'
}
},
required: ['hook', 'protocol_steps_completed', 'checklist_items_checked']
}
},
{
name: 'get_compliance_status',
description: 'Get current compliance check statistics and recent violations',
inputSchema: {
type: 'object',
properties: {}
}
},
{
name: 'get_protocol_config',
description: 'Get the current protocol enforcer configuration',
inputSchema: {
type: 'object',
properties: {}
}
},
{
name: 'authorize_file_operation',
description: 'MANDATORY before ANY file write/edit operation. Validates the operation token from verify_protocol_compliance. Optionally creates 60-minute session token for multi-tool workflows.',
inputSchema: {
type: 'object',
properties: {
operation_token: {
type: 'string',
description: 'The operation token received from verify_protocol_compliance. Required for authorization.'
},
session_id: {
type: 'string',
description: 'Optional: Claude Code session ID. If provided, creates a 60-minute session token that allows subsequent tools without repeated protocol flows. Reduces overhead from ~16 calls to ~4 calls per multi-tool workflow.'
}
},
required: []
}
}
];
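// Example exchange over stdio (illustrative):
//   -> {"jsonrpc":"2.0","id":1,"method":"tools/list"}
//   <- {"jsonrpc":"2.0","id":1,"result":{"tools":[ /* the five tools above */ ]}}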
// Main MCP message handler
async function handleMessage(message) {
const { method, params, id } = message;
// JSON-RPC 2.0: notifications carry no id and must not receive a response
if (id === undefined || id === null) {
return null;
}
const requestId = id;
switch (method) {
case 'initialize':
return {
jsonrpc: '2.0',
id: requestId,
result: {
protocolVersion: '2024-11-05',
serverInfo: {
name: 'protocol-enforcer',
version: '2.0.1'
},
capabilities: {
tools: {}
}
}
};
case 'tools/list':
return {
jsonrpc: '2.0',
id: requestId,
result: { tools }
};
case 'tools/call':
const { name, arguments: args } = params;
let result;
switch (name) {
case 'initialize_protocol_config':
result = await initializeProtocolConfig(args || {});
break;
case 'verify_protocol_compliance':
result = await verifyProtocolCompliance(args || {});
break;
case 'get_compliance_status':
result = await getComplianceStatus();
break;
case 'get_protocol_config':
result = await getProtocolConfig();
break;
case 'authorize_file_operation':
result = await authorizeFileOperation(args || {});
break;
default:
result = {
content: [{
type: 'text',
text: JSON.stringify({ error: `Unknown tool: ${name}` })
}],
isError: true
};
}
return {
jsonrpc: '2.0',
id: requestId,
result
};
default:
return {
jsonrpc: '2.0',
id: requestId,
error: {
code: -32601,
message: `Method not found: ${method}`
}
};
}
}
// Send JSON-RPC error response
function sendError(id, code, message, data) {
const response = {
jsonrpc: '2.0',
id: id,
error: {
code: code,
message: message,
data: data
}
};
console.error(`[protocol-enforcer] Sending error for id=${id}, code=${code}, message=${message}`);
console.log(JSON.stringify(response));
}
// Stdio transport
async function main() {
loadConfig();
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
terminal: false
});
rl.on('line', async (line) => {
try {
// Debug logging: Log incoming JSON for parse error investigation
if (process.env.DEBUG_MCP) {
console.error('[protocol-enforcer] Received line:', line.slice(0, 200) + (line.length > 200 ? '...' : ''));
}
const message = JSON.parse(line);
const response = await handleMessage(message);
// Notifications return null from handleMessage and get no reply
if (response) {
console.log(JSON.stringify(response));
}
} catch (error) {
// For parse errors, try to extract ID from malformed JSON, otherwise use -1
let requestId = -1;
try {
const partialParse = JSON.parse(line);
if (partialParse && partialParse.id !== undefined) {
requestId = partialParse.id;
}
} catch (e) {
// If we can't even partially parse, use -1
}
console.error('[protocol-enforcer] Parse error:', error.message);
console.error('[protocol-enforcer] Problematic line:', line.slice(0, 500));
sendError(requestId, -32700, 'Parse error', error.message);
}
});
// Graceful shutdown when STDIN closes (client disconnects)
rl.on('close', () => {
console.error('[protocol-enforcer] STDIN closed, shutting down gracefully');
process.exit(0);
});
// Handle termination signals
process.on('SIGTERM', () => {
console.error('[protocol-enforcer] Received SIGTERM, shutting down');
process.exit(0);
});
process.on('SIGINT', () => {
console.error('[protocol-enforcer] Received SIGINT, shutting down');
process.exit(0);
});
}
main().catch(console.error);
{
"name": "protocol-enforcer-mcp",
"version": "2.0.1",
"description": "MCP server that enforces mandatory supervisor protocol compliance before allowing file operations",
"author": "Jason Lusk <jason@jasonlusk.com>",
"license": "MIT",
"main": "index.js",
"bin": {
"protocol-enforcer": "./index.js"
},
"engines": {
"node": ">=14.0.0"
},
"dependencies": {
"uuid": "^9.0.0"
},
"keywords": [
"mcp",
"model-context-protocol",
"protocol-enforcer",
"ai-assistant",
"code-quality",
"compliance"
]
}
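A minimal client registration sketch (assuming the package above is published to npm under the name in package.json; the exact config key and file location vary by MCP client):

```json
{
  "mcpServers": {
    "protocol-enforcer": {
      "command": "npx",
      "args": ["-y", "protocol-enforcer-mcp"]
    }
  }
}
```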