A Model Context Protocol (MCP) server that enforces custom workflow protocols before allowing AI agents to perform file operations.
Author: Jason Lusk (jason@jasonlusk.com) · License: MIT · Gist: https://gist.github.com/mpalpha/c2f1723868c86343e590ed38e80f264d
Universal gatekeeper for AI coding assistants supporting Model Context Protocol:
- ✅ Works with any MCP-compatible client (Claude Code, Cursor, Cline, Zed, Continue)
- ✅ Enforces custom protocol steps before planning/coding
- ✅ Tracks required checklist items specific to your project
- ✅ Records compliance violations over time
- ✅ Fully configurable - adapt to any workflow
- ✅ Runs from npx - no installation needed
Fixed: Deadlock in authorize_file_operation parameter validation
- Changed `operation_token` from required to optional in the MCP schema
- The function now handles a missing token gracefully with a helpful error message
- Resolves a UX issue where the hook error message misled users
- Impact: Users now get clear guidance: "No operation token provided. You must call verify_protocol_compliance first to obtain a token."
Version 2.0.0 introduces Foundation Strategies - a comprehensive optimization framework that reduces protocol overhead by 66-71% while maintaining strict enforcement.
🚀 Performance Optimization
- Session Tokens: Multi-tool workflows without repeated protocol flows (60-minute TTL)
- Macro-Gates: 21 checklist items → 3 gates (ANALYZE, PLAN, VALIDATE)
- Fast-Track Mode: Low-complexity tasks skip PLAN gate automatically
- Unified Tokens: Single token file replaces 3 separate files
📉 Overhead Reduction
- Baseline (v1.0): ~16 tool calls per workflow
- Foundation (v2.0): ~5.3 tool calls per workflow
- Total Reduction: 66-71% fewer protocol interactions
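The reduction figure follows directly from the two call counts above:

```javascript
// Overhead reduction from v1.0 baseline (~16 calls) to v2.0 Foundation (~5.3 calls)
const baselineCalls = 16;
const foundationCalls = 5.3;
const reduction = (1 - foundationCalls / baselineCalls) * 100;
console.log(reduction.toFixed(1) + '% fewer protocol interactions'); // ≈ 66.9%, the low end of the 66-71% range
```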
⚙️ New Configuration Presets
- `config.foundation.json` - All optimizations enabled (recommended)
- Backward compatible with v1.0 configurations
- Fast-track criteria: single-line edits, comment changes, string literals
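The fast-track criteria above can be sketched as a simple classifier. This is illustrative only — the function name and edit shape are assumptions, not part of the server's API:

```javascript
// Illustrative fast-track check mirroring the criteria above: single-line edits,
// comment-only changes, and string-literal tweaks may skip the PLAN gate.
function isFastTrack(edit) {
  const lines = edit.newText.split('\n');
  if (lines.length === 1) return true; // single-line edit
  // comment-only change: every non-blank line starts with a comment marker
  if (lines.every(l => l.trim() === '' || /^(\/\/|\*|\/\*|#)/.test(l.trim()))) return true;
  return edit.kind === 'string_literal'; // string-literal tweak (hypothetical tag)
}
```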
uuid@^9.0.0 (Phase 4: Unified Tokens)
- First `npx` install will download uuid from npm
- Pure JavaScript, no native compilation
- If you prefer zero dependencies, use v1.0 configs (disable the `unified_tokens` strategy)
Option 1: Full Foundation (Recommended)
# Download Foundation config
curl -o .protocol-enforcer.json https://gist.githubusercontent.com/mpalpha/c2f1723868c86343e590ed38e80f264d/raw/config.foundation.json
# Reload IDE

Option 2: Selective Adoption

Update your existing config:
{
"version": "2.0.0",
"strategies_enabled": {
"session_tokens": true, // Enable for 60-min workflows
"extended_enforcement": true, // Add WebSearch/Grep/Task to enforcement
"macro_gates": true, // Simplify checklist to 3 gates
"fast_track_mode": true, // Skip PLAN for trivial changes
"unified_tokens": false // Disable to avoid uuid dependency
}
}

Option 3: Stay on v1.0

No changes needed - v2.0 is fully backward compatible with v1.0 configs.
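Note that the `//` comments in the config snippet above are for illustration; strict `JSON.parse` rejects them. If you copy the snippet verbatim into a loader, strip comments first — a minimal sketch, assuming no `//` appears inside string values:

```javascript
// Remove // line comments before parsing (illustrative; assumes no "//" inside string values)
function loadConfig(jsonText) {
  const stripped = jsonText.replace(/\/\/[^\n]*/g, '');
  return JSON.parse(stripped);
}

const config = loadConfig(`{
  "version": "2.0.0",
  "strategies_enabled": { "session_tokens": true, "unified_tokens": false } // example
}`);
console.log(config.strategies_enabled.session_tokens); // true
```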
| Platform | Config File | Hook Support | Enforcement |
|---|---|---|---|
| Claude Code | `.mcp.json` or `~/.claude.json` | ✅ Full (all 5 hooks) | Automatic blocking |
| Cursor | `~/.cursor/mcp.json` | ✅ Standard (PreToolUse) | Automatic blocking |
| Cline | `~/.cline/mcp.json` | PostToolUse only | Audit only |
| Zed | `~/.config/zed/mcp.json` | ❌ None | Voluntary |
| Continue | `~/.continue/mcp.json` | ❌ None | Voluntary |
Available Hooks: user_prompt_submit, session_start, pre_tool_use, post_tool_use, stop
Complete setup in 6 steps - installs MCP server + hooks for automatic enforcement.
Add to your platform's MCP config file (paths above):
{
"mcpServers": {
"protocol-enforcer": {
"command": "npx",
"args": ["-y", "https://gist.github.com/mpalpha/c2f1723868c86343e590ed38e80f264d"]
}
}
}

Claude Code only: If using `.claude/settings.local.json` with `enabledMcpjsonServers`, add `"protocol-enforcer"`.
Download the Foundation config template (recommended):
curl -o .protocol-enforcer.json \
  https://gist.githubusercontent.com/mpalpha/c2f1723868c86343e590ed38e80f264d/raw/config.foundation.json

Or create a minimal config in `.protocol-enforcer.json`:
{
"enforced_rules": {
"require_protocol_steps": [
{
"name": "planning",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
}
],
"require_checklist_confirmation": true,
"minimum_checklist_items": 2
},
"checklist_items": [
{
"text": "Requirements gathered",
"hook": "pre_tool_use"
},
{
"text": "Existing patterns analyzed",
"hook": "pre_tool_use"
},
{
"text": "Linting passed",
"hook": "post_tool_use"
}
]
}

See: Example Configurations for more options.
Create hooks directory:
mkdir -p .cursor/hooks

Create hook files from Appendix C:
You need to create 3 hook scripts. See Appendix C: Hook Scripts Reference for complete code.
Required hooks:
- `.cursor/hooks/pre-tool-use.cjs` - Blocks Write/Edit operations without valid tokens (CRITICAL FOR ENFORCEMENT)
- `.cursor/hooks/post-tool-use.cjs` - Logs successful operations to audit trail
- `.cursor/hooks/user-prompt-submit.cjs` - Reminds Claude to use informed_reasoning first
Quick setup (AI-assisted installation recommended):
- If installing via Claude: "Create hooks from Appendix C"
- If installing manually: Copy hook code from Appendix C sections below
Make all executable:
chmod +x .cursor/hooks/*.cjs

What each hook does:
- pre-tool-use.cjs - Blocks Write/Edit operations without valid tokens (CRITICAL FOR ENFORCEMENT)
- post-tool-use.cjs - Logs successful operations to audit trail
- user-prompt-submit.cjs - Reminds Claude to use informed_reasoning first
To skip hooks: MCP tools will still work, but no automatic blocking. Claude must voluntarily comply.
Add to .claude/settings.json (project) or ~/.claude/settings.json (user):
{
"hooks": {
"UserPromptSubmit": [{
"hooks": [{
"type": "command",
"command": "${workspaceFolder}/.cursor/hooks/user-prompt-submit.cjs"
}]
}],
"PreToolUse": [{
"matcher": "Write|Edit|NotebookEdit",
"hooks": [{
"type": "command",
"command": "${workspaceFolder}/.cursor/hooks/pre-tool-use.cjs"
}]
}],
"PostToolUse": [{
"matcher": "Write|Edit|NotebookEdit",
"hooks": [{
"type": "command",
"command": "${workspaceFolder}/.cursor/hooks/post-tool-use.cjs"
}]
}]
}
}

Note: Replace `${workspaceFolder}` with an absolute path if needed.
- Claude Code/Cursor/VSCode: `Cmd+Shift+P` → "Developer: Reload Window"
- Zed: Restart Zed
Test MCP connection:
await mcp__protocol_enforcer__get_protocol_config();

Test enforcement (should be blocked):
// Try to write without authorization - should fail
await Write({ file_path: "test.txt", content: "test" });
// Expected: ⛔ protocol-enforcer: must call informed_reasoning (analyze phase) first

When a user requests installation:
- Detect platform - Check which AI assistant and MCP config location
- Analyze project - Read ALL rule files (`.cursor/rules/**/*`, `.cursorrules`, `.github/**/*`, `docs/**/*`, etc.)
- Extract requirements - Identify protocol steps, checklist items, and behavioral rules
- Determine hook support - Configure based on platform capabilities (see table above)
- Propose configuration - Present tailored config matching project workflow
- Get approval - Confirm before creating files
Detailed guide: See Appendix A: AI Agent Installation Guide
All protocol steps and checklist items must be objects with a hook property (string format not supported).
Protocol Step Object:
{
"name": "step_name",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"] // Optional: tool-specific filtering
}

Checklist Item Object:
{
"text": "Item description",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"] // Optional: tool-specific filtering
}

| Hook | When | Use Case |
|---|---|---|
| `user_prompt_submit` | Before processing user message | Pre-response checks, sequential thinking |
| `session_start` | At session initialization | Display requirements, initialize tracking |
| `pre_tool_use` | Before tool execution | Primary enforcement point for file operations |
| `post_tool_use` | After tool execution | Validation, linting, audit logging |
| `stop` | Before session termination | Compliance reporting, cleanup |
Minimal (3 items):
{
"enforced_rules": {
"require_protocol_steps": [
{ "name": "sequential_thinking", "hook": "user_prompt_submit" },
{ "name": "planning", "hook": "pre_tool_use", "applies_to": ["Write", "Edit"] }
],
"require_checklist_confirmation": true,
"minimum_checklist_items": 2
},
"checklist_items": [
{ "text": "Sequential thinking completed FIRST", "hook": "user_prompt_submit" },
{ "text": "Plan created and confirmed", "hook": "pre_tool_use" },
{ "text": "Completion verified", "hook": "post_tool_use" }
]
}

See also:
- `config.minimal.json` - Basic workflow (6 items)
- `config.development.json` - Full development workflow (17 items)
- `config.behavioral.json` - LLM behavioral corrections (12 items)
Verify protocol steps completed for a specific hook.
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| `hook` | string | ✅ Yes | Lifecycle point: `user_prompt_submit`, `session_start`, `pre_tool_use`, `post_tool_use`, `stop` |
| `tool_name` | string | No | Tool being called (Write, Edit) for tool-specific filtering |
| `protocol_steps_completed` | string[] | ✅ Yes | Completed step names from config |
| `checklist_items_checked` | string[] | ✅ Yes | Verified checklist items from config |
Example:
const verification = await mcp__protocol_enforcer__verify_protocol_compliance({
hook: "pre_tool_use",
tool_name: "Write",
protocol_steps_completed: ["planning", "analysis"],
checklist_items_checked: ["Plan confirmed", "Patterns analyzed"]
});
// Returns: { compliant: true, operation_token: "abc123...", token_expires_in_seconds: 60 }
// Or: { compliant: false, violations: [...] }

MANDATORY before Write/Edit (when using PreToolUse hooks).
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| `operation_token` | string | ✅ Yes | Token from `verify_protocol_compliance` |
Token rules: Single-use, 60-second expiration, writes `~/.protocol-enforcer-token` for hook verification.
Get current configuration.
Returns: { config_path: "...", config: {...} }
Get compliance statistics and recent violations.
Returns: { total_checks: N, passed: N, failed: N, recent_violations: [...] }
Create new config file.
Parameters: scope: "project" | "user"
// 1. MANDATORY FIRST STEP: Call informed_reasoning (analyze phase)
// This writes ~/.protocol-informed-reasoning-token
await mcp__memory_augmented_reasoning__informed_reasoning({
phase: "analyze",
problem: "User request description and context needed"
});
// 2. At user message (user_prompt_submit hook) - Optional verification
await mcp__protocol_enforcer__verify_protocol_compliance({
hook: "user_prompt_submit",
protocol_steps_completed: ["informed_reasoning_analyze"],
checklist_items_checked: ["Used informed_reasoning (analyze phase) tool"]
});
// 3. Before file operations (pre_tool_use hook)
const verification = await mcp__protocol_enforcer__verify_protocol_compliance({
hook: "pre_tool_use",
tool_name: "Write",
protocol_steps_completed: ["informed_reasoning_analyze", "planning", "analysis"],
checklist_items_checked: [
"Used informed_reasoning (analyze phase) tool before proceeding",
"Plan confirmed",
"Patterns analyzed"
]
});
// 4. Authorize file operation
// This writes ~/.protocol-enforcer-token
if (verification.compliant) {
await mcp__protocol_enforcer__authorize_file_operation({
operation_token: verification.operation_token
});
// Now Write/Edit operations allowed
// PreToolUse hook will check for BOTH tokens
}
// 5. After file operations (post_tool_use hook)
await mcp__protocol_enforcer__verify_protocol_compliance({
hook: "post_tool_use",
tool_name: "Write",
protocol_steps_completed: ["execution"],
checklist_items_checked: ["Linting passed", "Types checked"]
});

- Only rules with a matching `hook` value are checked
- If `applies_to` is specified, the tool name must match
- Enables context-specific enforcement at different lifecycle points
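These filtering rules can be sketched as a predicate over config rules. The rule objects below match the protocol-step shape shown earlier; the function itself is illustrative, not the server's internal code:

```javascript
// A rule fires only when its `hook` matches the current lifecycle point,
// and (if `applies_to` is present) the tool name is listed.
function ruleApplies(rule, hook, toolName) {
  if (rule.hook !== hook) return false;
  if (rule.applies_to && !rule.applies_to.includes(toolName)) return false;
  return true;
}

const rules = [
  { name: 'planning', hook: 'pre_tool_use', applies_to: ['Write', 'Edit'] },
  { name: 'sequential_thinking', hook: 'user_prompt_submit' }
];
const active = rules.filter(r => ruleApplies(r, 'pre_tool_use', 'Write'));
console.log(active.map(r => r.name)); // [ 'planning' ]
```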
For automatic blocking of unauthorized file operations (Claude Code, Cursor only).
- Create hooks directory: `mkdir -p .cursor/hooks`
- Create hook scripts from templates (see Appendix C)
- Make executable: `chmod +x .cursor/hooks/*.cjs`
- Configure platform:
Claude Code CLI (v2.1.7+) - Add to .claude/settings.json (project) or ~/.claude/settings.json (user):
{
"hooks": {
"PreToolUse": [
{
"matcher": "Write|Edit|NotebookEdit",
"hooks": [
{
"type": "command",
"command": "/absolute/path/.cursor/hooks/pre-tool-use.cjs"
}
]
}
],
"PostToolUse": [
{
"matcher": "Write|Edit|NotebookEdit",
"hooks": [
{
"type": "command",
"command": "/absolute/path/.cursor/hooks/post-tool-use.cjs"
}
]
}
]
}
}

IMPORTANT: Hook response format as of Claude Code v2.1.7+ (Jan 2026):
// Deny operation
console.log(JSON.stringify({
hookSpecificOutput: {
hookEventName: "PreToolUse",
permissionDecision: "deny",
permissionDecisionReason: "Reason shown to Claude/user"
}
}));
// Allow operation
console.log(JSON.stringify({
hookSpecificOutput: {
hookEventName: "PreToolUse",
permissionDecision: "allow"
}
}));

Replace `/absolute/path/` with your actual project path.
1. AI calls informed_reasoning (analyze phase)
→ writes ~/.protocol-informed-reasoning-token
2. AI calls verify_protocol_compliance → receives operation_token (60s expiration)
3. AI calls authorize_file_operation(token) → writes ~/.protocol-enforcer-token
4. AI attempts Write/Edit → PreToolUse hook intercepts
CHECK 1: informed_reasoning token exists?
CHECK 2: protocol-enforcer token exists?
- Both found → consume both (delete), allow operation
- Either missing → block operation
5. Next Write/Edit → Both tokens missing → blocked
Result: Two-factor verification ensures both thinking and protocol compliance.
Why Two Tokens?
- Reasoning Token: Physical proof that informed_reasoning tool was actually called
- Protocol Token: Authorization after verifying all protocol steps and checklist items
- Cross-Process Verification: Hooks run separately from MCP server, tokens provide shared state
Add to your project's supervisor rules:
Claude Code: .cursor/rules/protocol-enforcer.mdc
Cursor: .cursorrules
Cline: .clinerules
Continue: .continuerules
## Protocol Enforcer Integration (MANDATORY)
Before ANY file write/edit operation:
1. Complete required protocol steps from `.protocol-enforcer.json`
2. Call `mcp__protocol_enforcer__verify_protocol_compliance` with:
- `hook`: lifecycle point (e.g., "pre_tool_use")
- `protocol_steps_completed`: completed step names
- `checklist_items_checked`: verified items
3. If `compliant: false`, fix violations and retry
4. Call `mcp__protocol_enforcer__authorize_file_operation` with token
5. Only proceed if `authorized: true`
**No exceptions allowed.**

See: Appendix B: Complete Supervisor Examples for platform-specific examples.
| Issue | Solution |
|---|---|
| Server not appearing | Check config file syntax, gist URL, file location, reload IDE |
| Configuration not loading | Verify .protocol-enforcer.json filename, check JSON syntax |
| Tools not working | Test with get_protocol_config, check tool names (must use full mcp__protocol-enforcer__*) |
| Hook not blocking | Verify platform support, check hook executable (chmod +x), verify absolute path, reload IDE |
| Token errors | Check ~/.protocol-enforcer-token exists after authorize_file_operation |
Claude Code only: Add "protocol-enforcer" to enabledMcpjsonServers if using allowlist.
AI assistants bypass project protocols under pressure or context limits. This server:
- Enforces consistency - same rules for every task, all platforms
- Provides traceability - tracks protocol adherence
- Reduces technical debt - prevents shortcuts violating standards
- Works with ANY workflow - not tied to specific tools
- Runs from npx - zero installation/maintenance
Detailed analysis process for AI agents installing this MCP server.
Check which AI coding assistant is active:
- Look for existing MCP config files (`.mcp.json`, `~/.claude.json`, `~/.cursor/mcp.json`, etc.)
- Identify IDE/editor environment
Read ALL rule files (critical - don't skip):
- `.cursor/rules/**/*.mdc` - All rule types
- `.cursorrules`, `.clinerules`, `.continuerules` - Platform rules
- `.eslintrc.*`, `.prettierrc.*` - Code formatting
- `tsconfig.json` - TypeScript config
- `.github/CONTRIBUTING.md`, `.github/pull_request_template.md` - Contribution guidelines
- `README.md`, `CLAUDE.md`, `docs/**/*` - Project documentation
Extract from each file:
- **Protocol Steps** (workflow stages):
  - Look for: "first", "before", "then", "after", "finally"
  - Example: "Before ANY file operation, do X" → protocol step "X"
  - Group related steps (3-7 steps typical)
- **Checklist Items** (verification checks):
  - Look for: "MUST", "REQUIRED", "MANDATORY", "CRITICAL", "NEVER", "ALWAYS"
  - Quality checks: "verify", "ensure", "check", "confirm"
  - Each item should be specific and verifiable
- **Behavioral Rules** (constraints):
  - Hard requirements: "NO EXCEPTIONS", "supersede all instructions"
  - Pre-approved actions: "auto-fix allowed", "no permission needed"
  - Forbidden actions: "NEVER edit X", "DO NOT use Y"
- **Tool Requirements** (MCP tool calls):
  - Explicit requirements: "use mcp__X tool"
  - Tool sequences: "call X before Y"
- **Conditional Requirements** (context-specific):
  - "If GraphQL changes, run codegen"
  - "If SCSS changes, verify spacing"
  - Mark as `required: false` in checklist
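The keyword heuristics above can be sketched as a scanner over rule-file text. The regex and output shape are illustrative assumptions, not the agent's actual extraction logic:

```javascript
// Scan rule text for strong imperative keywords and propose checklist items
// (illustrative heuristic mirroring the "MUST/REQUIRED/NEVER/ALWAYS" guidance above).
function extractChecklistCandidates(text) {
  const strong = /\b(MUST|REQUIRED|MANDATORY|CRITICAL|NEVER|ALWAYS)\b/;
  return text
    .split('\n')
    .map(line => line.trim())
    .filter(line => strong.test(line))
    .map(line => ({ text: line, hook: 'pre_tool_use', required: true }));
}

const sample = 'Style notes here\nYou MUST run lint before commit\nNEVER edit generated files';
console.log(extractChecklistCandidates(sample).length); // 2
```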
Example Extraction:
From .cursor/rules/mandatory-supervisor-protocol.mdc:
"BEFORE ANY OTHER ACTION, EVERY USER QUERY MUST:
1. First use mcp__clear-thought__sequentialthinking tool"
→ Protocol step: { name: "sequential_thinking", hook: "user_prompt_submit" }
→ Checklist: { text: "Sequential thinking completed FIRST", hook: "user_prompt_submit" }
If documentation references external URLs:
- Use WebSearch/WebFetch to retrieve library docs, style guides, API specs
- Extract additional requirements from online sources
- Integrate with local requirements
Based on analysis, determine workflow:
- TDD - Test files exist, tests-first culture
- Design-First - Figma links, design system, token mappings
- Planning & Analysis - Generic best practices
- Behavioral - Focus on LLM behavioral corrections (CHORES framework)
- Minimal - Small projects, emergency mode
Configure based on platform capabilities:
| Platform | Recommended Hooks | Strategy |
|---|---|---|
| Claude Code | All 5 hooks | Maximum enforcement |
| Cursor | PreToolUse + PostToolUse | Standard enforcement |
| Cline | PostToolUse only | Audit logging |
| Zed/Continue | None | Voluntary compliance |
- Present findings: "I've analyzed [N] rule files and detected [workflow type]. Your platform ([platform]) supports [hooks]."
- Show proposed config with extracted steps and checklist items
- Explain trade-offs: With/without hooks, full vs. minimal enforcement
- Get approval before creating files
- Add MCP server to config file
- Create `.protocol-enforcer.json` with tailored configuration
- Create hook scripts if platform supports them
- Update supervisor protocol files with integration instructions
- Reload IDE
File: .cursor/rules/protocol-enforcer.mdc
---
description: Planning & Analysis Protocol with PreToolUse Hooks
globs:
alwaysApply: true
---
## Protocol Enforcer Integration (MANDATORY)
### Required Steps (from .protocol-enforcer.json):
1. **sequential_thinking** - Complete before responding
2. **planning** - Plan implementation with objectives
3. **analysis** - Analyze codebase for reusable patterns
### Required Checklist:
- Sequential thinking completed FIRST
- Searched for reusable components/utilities
- Matched existing code patterns
- Plan confirmed by user
### Workflow:
**CRITICAL OVERRIDE RULE:**
BEFORE ANY ACTION, call `mcp__clear-thought__sequentialthinking` then `mcp__protocol_enforcer__verify_protocol_compliance`.
NO EXCEPTIONS.
**Process:**
1. **Sequential Thinking** (user_prompt_submit hook)
- Use sequentialthinking tool
- Verify: `mcp__protocol_enforcer__verify_protocol_compliance({ hook: "user_prompt_submit", ... })`
2. **Planning**
- Define objectives, files to modify, dependencies
- Mark `planning` complete
3. **Analysis**
- Search codebase for similar features
- Review `src/components/`, `src/hooks/`, `src/utils/`
- Mark `analysis` complete
4. **Verify Compliance**
```typescript
const v = await mcp__protocol_enforcer__verify_protocol_compliance({
hook: "pre_tool_use",
tool_name: "Write",
protocol_steps_completed: ["planning", "analysis"],
checklist_items_checked: [
"Searched for reusable components/utilities",
"Matched existing code patterns",
"Plan confirmed by user"
]
});
```

5. **Authorize**

   ```typescript
   await mcp__protocol_enforcer__authorize_file_operation({ operation_token: v.operation_token });
   ```

6. **Implement**
   - Only after authorization
   - Minimal changes only
   - No scope creep
PreToolUse hooks block unauthorized file operations. Token required per file change (60s expiration).
**Config:** `config.development.json`
---
#### Example 2: Design-First (Cursor)
**File:** `.cursorrules`
- design_review - Review Figma specs
- component_mapping - Map to existing/new components
- Design tokens mapped to SCSS variables
- Figma specs reviewed
- Accessibility requirements checked
- Responsive breakpoints defined
- Open Figma, extract design tokens (colors, spacing, typography)
- Note accessibility (ARIA, keyboard nav)
- Document responsive breakpoints
- Search for similar components
- Decide: reuse, extend, or create
- Map Figma tokens to SCSS variables
mcp__protocol_enforcer__verify_protocol_compliance({
hook: "pre_tool_use",
tool_name: "Write",
protocol_steps_completed: ["design_review", "component_mapping"],
checklist_items_checked: [
"Design tokens mapped to SCSS variables",
"Figma specs reviewed",
"Accessibility requirements checked"
]
})
After verification, authorize then proceed with component implementation.
**Config:** Custom design-focused config with `design_review` and `component_mapping` steps.
---
#### Example 3: Behavioral Corrections (Any Platform)
**File:** `.cursor/rules/behavioral-protocol.mdc`
```markdown
---
description: LLM Behavioral Corrections (MODEL Framework CHORES)
alwaysApply: true
---
## Protocol Enforcer Integration (MANDATORY)
Enforces behavioral corrections from MODEL Framework CHORES analysis.
### Required Steps:
1. **analyze_behavior** - Analyze response for CHORES issues
2. **apply_chores_fixes** - Apply corrections before file operations
### Required Checklist (CHORES):
- **C**onstraint issues addressed (structure/format adherence)
- **H**allucination issues addressed (no false information)
- **O**verconfidence addressed (uncertainty when appropriate)
- **R**easoning issues addressed (logical consistency)
- **E**thical/Safety issues addressed (no harmful content)
- **S**ycophancy addressed (truthfulness over agreement)
### Workflow:
1. **Analyze Behavior** (user_prompt_submit)
- Review response for CHORES issues
- Verify: `mcp__protocol_enforcer__verify_protocol_compliance({ hook: "user_prompt_submit", ... })`
2. **Apply Fixes** (pre_tool_use)
- Address identified CHORES issues
- Verify all checklist items before file ops
- Authorize with token
### Enforcement:
This config uses the default behavioral corrections from `index.js` DEFAULT_CONFIG.
```

**Config:** `config.behavioral.json`
File: .protocol-enforcer.json (minimal)
{
"enforced_rules": {
"require_protocol_steps": [
{ "name": "acknowledge", "hook": "pre_tool_use" }
],
"require_checklist_confirmation": true,
"minimum_checklist_items": 1
},
"checklist_items": [
{ "text": "I acknowledge this change", "hook": "pre_tool_use" }
]
}

Use: Emergency fixes, rapid prototyping only.
| Feature | Claude Code | Cursor | Cline | Zed/Continue |
|---|---|---|---|---|
| Hooks Available | All 5 | PreToolUse + PostToolUse | PostToolUse | None |
| Automatic Blocking | ✅ Yes | ✅ Yes | ❌ No | ❌ No |
| Recommended Steps | 5-7 steps | 3-5 steps | 2-3 steps | 1-2 steps |
| Enforcement Level | Maximum | Standard | Audit | Voluntary |
| Best For | Production | Development | Code review | Minimal |
All 5 hook scripts for creating in .cursor/hooks/ or .cline/hooks/.
Blocks unauthorized file operations without valid tokens. Two-token verification system ensures both informed_reasoning and protocol compliance.
Updated for Claude Code v2.1.7+ (Jan 2026):
#!/usr/bin/env node
const fs = require('fs');
const path = require('path');
const os = require('os');
let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
const tokenFile = path.join(os.homedir(), '.protocol-enforcer-token');
const reasoningTokenFile = path.join(os.homedir(), '.protocol-informed-reasoning-token');
// CHECK 1: Verify informed_reasoning was called
if (!fs.existsSync(reasoningTokenFile)) {
const response = {
hookSpecificOutput: {
hookEventName: "PreToolUse",
permissionDecision: "deny",
permissionDecisionReason: "⛔ protocol-enforcer: must call informed_reasoning (analyze phase) first"
}
};
console.log(JSON.stringify(response));
process.stderr.write('\n⛔ protocol-enforcer: informed_reasoning not called\n');
process.exit(0);
}
// CHECK 2: Verify protocol compliance authorization
if (!fs.existsSync(tokenFile)) {
const response = {
hookSpecificOutput: {
hookEventName: "PreToolUse",
permissionDecision: "deny",
permissionDecisionReason: "⛔ protocol-enforcer: call mcp__protocol-enforcer__authorize_file_operation"
}
};
console.log(JSON.stringify(response));
process.stderr.write('\n⛔ protocol-enforcer: operation not authorized\n');
process.exit(0);
}
// Both tokens exist - consume them and allow
try {
fs.unlinkSync(reasoningTokenFile);
fs.unlinkSync(tokenFile);
const response = {
hookSpecificOutput: {
hookEventName: "PreToolUse",
permissionDecision: "allow",
permissionDecisionReason: "✅ protocol-enforcer: all requirements met"
}
};
console.log(JSON.stringify(response));
process.stderr.write('✅ protocol-enforcer: operation authorized (protocol + informed_reasoning verified)\n');
process.exit(0);
} catch (e) {
const response = {
hookSpecificOutput: {
hookEventName: "PreToolUse",
permissionDecision: "deny",
permissionDecisionReason: `⛔ protocol-enforcer: token error - ${e.message}`
}
};
console.log(JSON.stringify(response));
process.stderr.write(`\n⛔ protocol-enforcer: token consumption failed - ${e.message}\n`);
process.exit(0);
}
});

Logs successful operations to audit trail.
#!/usr/bin/env node
const fs = require('fs');
const path = require('path');
const os = require('os');
let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
try {
const hookData = JSON.parse(input);
const logFile = path.join(os.homedir(), '.protocol-enforcer-audit.log');
const logEntry = {
timestamp: new Date().toISOString(),
tool: hookData.toolName || 'unknown',
session: hookData.sessionId || 'unknown',
success: true
};
fs.appendFileSync(logFile, JSON.stringify(logEntry) + '\n', 'utf8');
process.exit(0);
} catch (e) {
process.exit(0); // Silent fail - don't block on logging errors
}
});

Enforces CRITICAL OVERRIDE RULES, blocks bypass attempts.
#!/usr/bin/env node
const fs = require('fs');
const path = require('path');
let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
try {
const hookData = JSON.parse(input);
const userPrompt = hookData.userPrompt || '';
// Detect bypass attempts
const bypassPatterns = [
/ignore.*protocol/i,
/skip.*verification/i,
/bypass.*enforcer/i,
/disable.*mcp/i
];
for (const pattern of bypassPatterns) {
if (pattern.test(userPrompt)) {
process.stderr.write('⛔ BYPASS ATTEMPT DETECTED: Protocol enforcement cannot be disabled.\n');
process.exit(2); // Block
}
}
// Inject protocol reminder for file operations
if (/write|edit|create|modify/i.test(userPrompt)) {
const reminder = '\n\n[PROTOCOL REMINDER: Before file operations, call mcp__protocol-enforcer__verify_protocol_compliance and mcp__protocol-enforcer__authorize_file_operation]';
console.log(JSON.stringify({
userPrompt: userPrompt + reminder
}));
} else {
console.log(input); // Pass through unchanged
}
process.exit(0);
} catch (e) {
console.log(input); // Pass through on error
process.exit(0);
}
});

Initializes compliance tracking, displays protocol requirements.
#!/usr/bin/env node
const fs = require('fs');
const path = require('path');
let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
try {
// Load .protocol-enforcer.json
const cwd = process.cwd();
const configPath = path.join(cwd, '.protocol-enforcer.json');
if (fs.existsSync(configPath)) {
const config = JSON.parse(fs.readFileSync(configPath, 'utf8'));
console.error('\n📋 Protocol Enforcer Active\n');
console.error('Required Protocol Steps:');
config.enforced_rules.require_protocol_steps.forEach(step => {
console.error(` - ${step.name} (hook: ${step.hook})`);
});
console.error(`\nMinimum Checklist Items: ${config.enforced_rules.minimum_checklist_items}\n`);
}
process.exit(0);
} catch (e) {
process.exit(0); // Silent fail
}
});

Generates compliance report at end of response.
#!/usr/bin/env node
const fs = require('fs');
const path = require('path');
const os = require('os');
let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
try {
// Check for unused tokens
const tokenFile = path.join(os.homedir(), '.protocol-enforcer-token');
if (fs.existsSync(tokenFile)) {
console.error('\n⚠️ Unused authorization token detected - was file operation skipped?\n');
fs.unlinkSync(tokenFile); // Cleanup
}
// Read audit log for session summary
const logFile = path.join(os.homedir(), '.protocol-enforcer-audit.log');
if (fs.existsSync(logFile)) {
const logs = fs.readFileSync(logFile, 'utf8').trim().split('\n');
const recentLogs = logs.slice(-10); // Last 10 operations
console.error('\n📊 Session Compliance Summary:');
      console.error(`Recent operations logged: ${recentLogs.length}`);
}
process.exit(0);
} catch (e) {
process.exit(0); // Silent fail
}
});Enhanced hooks for preserving session state across context resets and compactions.
These hooks work together to create a robust context persistence system with validation, cleanup, and error logging.
Auto-saves session state before context compaction with validation and error logging.
#!/usr/bin/env node
/**
* PreCompact Hook - Auto-save handoff before context compaction
* Preserves session state in JSON format (no dependencies required)
*/
const fs = require('fs');
const path = require('path');
// Validation function
function validateHandoff(handoff) {
const errors = [];
// Check required fields
if (!handoff.date) errors.push('Missing date');
if (!handoff.session_id) errors.push('Missing session_id');
if (!handoff.status) errors.push('Missing status');
// Check tasks structure
if (!handoff.tasks || typeof handoff.tasks !== 'object') {
errors.push('Missing or invalid tasks object');
} else {
if (!Array.isArray(handoff.tasks.completed)) errors.push('tasks.completed must be array');
if (!Array.isArray(handoff.tasks.in_progress)) errors.push('tasks.in_progress must be array');
if (!Array.isArray(handoff.tasks.pending)) errors.push('tasks.pending must be array');
if (!Array.isArray(handoff.tasks.blockers)) errors.push('tasks.blockers must be array');
}
// Check decisions and next_steps
if (!Array.isArray(handoff.decisions)) errors.push('decisions must be array');
if (!Array.isArray(handoff.next_steps)) errors.push('next_steps must be array');
return { valid: errors.length === 0, errors };
}
// Error logging function
function logError(handoffDir, hookName, error) {
try {
const logPath = path.join(handoffDir, 'errors.log');
const timestamp = new Date().toISOString();
const logEntry = `[${timestamp}] [${hookName}] ${error}\n`;
fs.appendFileSync(logPath, logEntry, 'utf8');
} catch (e) {
// Silent fail on logging error
}
}
let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
try {
const hookData = JSON.parse(input);
const sessionId = hookData.sessionId || 'unknown';
const trigger = hookData.trigger || 'unknown';
const cwd = hookData.cwd || process.cwd();
// Create handoff directory
const handoffDir = path.join(cwd, '.claude', 'handoffs');
if (!fs.existsSync(handoffDir)) {
fs.mkdirSync(handoffDir, { recursive: true });
}
// Create handoff document
const handoff = {
date: new Date().toISOString(),
session_id: sessionId,
trigger: trigger,
summary: "Session state preserved before compaction",
status: "in_progress",
context: {
cwd: cwd,
compaction_type: trigger
},
tasks: {
completed: [],
in_progress: [],
pending: [],
blockers: []
},
decisions: [],
next_steps: []
};
// Validate handoff
const validation = validateHandoff(handoff);
if (!validation.valid) {
logError(handoffDir, 'PreCompact', `Validation failed: ${validation.errors.join(', ')}`);
console.error(`\n⚠️ PreCompact: Handoff validation failed (${validation.errors.length} errors) - saving anyway`);
}
// Save handoff
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
const filename = `${sessionId}_${timestamp}.json`;
const handoffPath = path.join(handoffDir, filename);
fs.writeFileSync(handoffPath, JSON.stringify(handoff, null, 2), 'utf8');
// Update latest.json reference
const latestPath = path.join(handoffDir, 'latest.json');
fs.writeFileSync(latestPath, JSON.stringify({
latest_handoff: filename,
created: handoff.date,
session_id: sessionId
}, null, 2), 'utf8');
// Write to stderr for visibility
console.error(`\n📋 PreCompact: Handoff saved to ${filename}`);
console.error(` Trigger: ${trigger}`);
if (!validation.valid) {
console.error(` Validation: ${validation.errors.length} issues detected`);
}
process.exit(0);
} catch (e) {
const cwd = process.cwd();
const handoffDir = path.join(cwd, '.claude', 'handoffs');
logError(handoffDir, 'PreCompact', `Exception: ${e.message}`);
console.error(`\n⚠️ PreCompact hook error: ${e.message}`);
process.exit(0);
}
});
Creates final handoff with cleanup and validation when session ends.
#!/usr/bin/env node
/**
* SessionEnd Hook - Create final handoff and cleanup
* Captures session outcome and prepares for next session
*/
const fs = require('fs');
const path = require('path');
// Validation function
function validateHandoff(handoff) {
const errors = [];
// Check required fields
if (!handoff.date) errors.push('Missing date');
if (!handoff.session_id) errors.push('Missing session_id');
if (!handoff.status) errors.push('Missing status');
// Check tasks structure
if (!handoff.tasks || typeof handoff.tasks !== 'object') {
errors.push('Missing or invalid tasks object');
} else {
if (!Array.isArray(handoff.tasks.completed)) errors.push('tasks.completed must be array');
if (!Array.isArray(handoff.tasks.in_progress)) errors.push('tasks.in_progress must be array');
if (!Array.isArray(handoff.tasks.pending)) errors.push('tasks.pending must be array');
if (!Array.isArray(handoff.tasks.blockers)) errors.push('tasks.blockers must be array');
}
// Check decisions and next_steps
if (!Array.isArray(handoff.decisions)) errors.push('decisions must be array');
if (!Array.isArray(handoff.next_steps)) errors.push('next_steps must be array');
return { valid: errors.length === 0, errors };
}
// Cleanup function - keep last 10 handoff files
function cleanupOldHandoffs(handoffDir) {
try {
const files = fs.readdirSync(handoffDir)
.filter(f => f.endsWith('.json') && f !== 'latest.json')
.sort()
.reverse();
if (files.length > 10) {
const toDelete = files.slice(10);
toDelete.forEach(f => {
fs.unlinkSync(path.join(handoffDir, f));
});
return toDelete.length;
}
return 0;
} catch (e) {
return 0;
}
}
// Error logging function
function logError(handoffDir, hookName, error) {
try {
const logPath = path.join(handoffDir, 'errors.log');
const timestamp = new Date().toISOString();
const logEntry = `[${timestamp}] [${hookName}] ${error}\n`;
fs.appendFileSync(logPath, logEntry, 'utf8');
} catch (e) {
// Silent fail on logging error
}
}
let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
try {
const hookData = JSON.parse(input);
const sessionId = hookData.sessionId || 'unknown';
const reason = hookData.reason || 'unknown';
const cwd = hookData.cwd || process.cwd();
// Create handoff directory
const handoffDir = path.join(cwd, '.claude', 'handoffs');
if (!fs.existsSync(handoffDir)) {
fs.mkdirSync(handoffDir, { recursive: true });
}
// Read latest handoff if exists
const latestPath = path.join(handoffDir, 'latest.json');
let existingHandoff = {
tasks: { completed: [], in_progress: [], pending: [], blockers: [] },
decisions: [],
next_steps: []
};
if (fs.existsSync(latestPath)) {
try {
const latestInfo = JSON.parse(fs.readFileSync(latestPath, 'utf8'));
if (latestInfo.latest_handoff) {
const existingPath = path.join(handoffDir, latestInfo.latest_handoff);
if (fs.existsSync(existingPath)) {
existingHandoff = JSON.parse(fs.readFileSync(existingPath, 'utf8'));
}
}
} catch (e) {
logError(handoffDir, 'SessionEnd', `Error reading existing handoff: ${e.message}`);
}
}
// Create final handoff
const handoff = {
date: new Date().toISOString(),
session_id: sessionId,
session_end_reason: reason,
status: "completed",
context: {
cwd: cwd,
ended_at: new Date().toISOString()
},
tasks: existingHandoff.tasks,
decisions: existingHandoff.decisions,
next_steps: existingHandoff.next_steps,
notes: "Session ended. Review tasks and update handoff manually if needed."
};
// Validate handoff
const validation = validateHandoff(handoff);
if (!validation.valid) {
logError(handoffDir, 'SessionEnd', `Validation failed: ${validation.errors.join(', ')}`);
console.error(`\n⚠️ SessionEnd: Handoff validation failed (${validation.errors.length} errors) - saving anyway`);
}
// Save final handoff
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
const filename = `${sessionId}_final_${timestamp}.json`;
const handoffPath = path.join(handoffDir, filename);
fs.writeFileSync(handoffPath, JSON.stringify(handoff, null, 2), 'utf8');
// Update latest reference
fs.writeFileSync(latestPath, JSON.stringify({
latest_handoff: filename,
created: handoff.date,
session_id: sessionId,
final: true
}, null, 2), 'utf8');
// Cleanup old handoffs
const deletedCount = cleanupOldHandoffs(handoffDir);
if (deletedCount > 0) {
console.error(` Cleanup: Deleted ${deletedCount} old handoff file(s)`);
}
// Write to stderr for visibility
console.error(`\n📋 SessionEnd: Final handoff saved to ${filename}`);
console.error(` Reason: ${reason}`);
if (!validation.valid) {
console.error(` Validation: ${validation.errors.length} issues detected`);
}
console.error(` Review: .claude/handoffs/${filename}`);
process.exit(0);
} catch (e) {
const cwd = process.cwd();
const handoffDir = path.join(cwd, '.claude', 'handoffs');
logError(handoffDir, 'SessionEnd', `Exception: ${e.message}`);
console.error(`\n⚠️ SessionEnd hook error: ${e.message}`);
process.exit(0);
}
});
Loads previous handoff with validation and graceful degradation.
#!/usr/bin/env node
/**
* SessionStart Hook - Load previous handoff into context
* Provides continuity by injecting previous session state
*/
const fs = require('fs');
const path = require('path');
// Validation function
function validateHandoff(handoff) {
const errors = [];
// Check required fields
if (!handoff.date) errors.push('Missing date');
if (!handoff.session_id) errors.push('Missing session_id');
if (!handoff.status) errors.push('Missing status');
// Check tasks structure
if (!handoff.tasks || typeof handoff.tasks !== 'object') {
errors.push('Missing or invalid tasks object');
} else {
if (!Array.isArray(handoff.tasks.completed)) errors.push('tasks.completed must be array');
if (!Array.isArray(handoff.tasks.in_progress)) errors.push('tasks.in_progress must be array');
if (!Array.isArray(handoff.tasks.pending)) errors.push('tasks.pending must be array');
if (!Array.isArray(handoff.tasks.blockers)) errors.push('tasks.blockers must be array');
}
// Check decisions and next_steps
if (!Array.isArray(handoff.decisions)) errors.push('decisions must be array');
if (!Array.isArray(handoff.next_steps)) errors.push('next_steps must be array');
return { valid: errors.length === 0, errors };
}
// Error logging function
function logError(handoffDir, hookName, error) {
try {
const logPath = path.join(handoffDir, 'errors.log');
const timestamp = new Date().toISOString();
const logEntry = `[${timestamp}] [${hookName}] ${error}\n`;
fs.appendFileSync(logPath, logEntry, 'utf8');
} catch (e) {
// Silent fail on logging error
}
}
let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
try {
const hookData = JSON.parse(input);
const cwd = hookData.cwd || process.cwd();
// Look for latest handoff
const handoffDir = path.join(cwd, '.claude', 'handoffs');
const latestPath = path.join(handoffDir, 'latest.json');
if (!fs.existsSync(latestPath)) {
console.log(JSON.stringify({
userPrompt: "\n\n[NO PREVIOUS SESSION - Starting fresh]"
}));
process.exit(0);
return;
}
// Read latest handoff reference
const latestInfo = JSON.parse(fs.readFileSync(latestPath, 'utf8'));
if (!latestInfo.latest_handoff) {
console.log(JSON.stringify({
userPrompt: "\n\n[NO PREVIOUS SESSION - Starting fresh]"
}));
process.exit(0);
return;
}
// Read handoff document
const handoffPath = path.join(handoffDir, latestInfo.latest_handoff);
if (!fs.existsSync(handoffPath)) {
logError(handoffDir, 'SessionStart', `Handoff file not found: ${handoffPath}`);
console.log(JSON.stringify({
userPrompt: "\n\n[PREVIOUS HANDOFF NOT FOUND - Starting fresh]"
}));
process.exit(0);
return;
}
const handoff = JSON.parse(fs.readFileSync(handoffPath, 'utf8'));
// Validate handoff
const validation = validateHandoff(handoff);
if (!validation.valid) {
logError(handoffDir, 'SessionStart', `Validation failed: ${validation.errors.join(', ')}`);
console.error(`\n⚠️ SessionStart: Handoff validation failed (${validation.errors.length} errors) - loading anyway`);
}
// Format handoff for injection
let contextInjection = `
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 PREVIOUS SESSION HANDOFF
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Session ID:** ${handoff.session_id}
**Date:** ${handoff.date}
**Status:** ${handoff.status}
${handoff.session_end_reason ? `**End Reason:** ${handoff.session_end_reason}` : ''}
## Tasks
### Completed
${handoff.tasks.completed.length > 0 ? handoff.tasks.completed.map(t => `- [x] ${t}`).join('\n') : '- None'}
### In Progress
${handoff.tasks.in_progress.length > 0 ? handoff.tasks.in_progress.map(t => `- [ ] ${t}`).join('\n') : '- None'}
### Pending
${handoff.tasks.pending.length > 0 ? handoff.tasks.pending.map(t => `- [ ] ${t}`).join('\n') : '- None'}
### Blockers
${handoff.tasks.blockers.length > 0 ? handoff.tasks.blockers.map(b => `- ⚠️ ${b}`).join('\n') : '- None'}
## Key Decisions
${handoff.decisions.length > 0 ? handoff.decisions.map(d => `- ${d}`).join('\n') : '- None documented'}
## Next Steps
${handoff.next_steps.length > 0 ? handoff.next_steps.map((s, i) => `${i + 1}. ${s}`).join('\n') : '- Not specified'}
${handoff.notes ? `\n## Notes\n${handoff.notes}` : ''}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**CONTINUITY MODE ACTIVE** - Context preserved from previous session.
To update this handoff, modify: .claude/handoffs/${latestInfo.latest_handoff}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
`;
// Add validation warning if needed
if (!validation.valid) {
contextInjection += `\n\n⚠️ **Warning:** Handoff validation detected ${validation.errors.length} issue(s):\n${validation.errors.map(e => `- ${e}`).join('\n')}\n`;
}
// Output to stdout
console.log(JSON.stringify({
userPrompt: contextInjection
}));
// Write to stderr for visibility
console.error(`\n📋 SessionStart: Loaded handoff from ${latestInfo.latest_handoff}`);
if (!validation.valid) {
console.error(` Validation: ${validation.errors.length} issues detected`);
}
process.exit(0);
} catch (e) {
const cwd = process.cwd();
const handoffDir = path.join(cwd, '.claude', 'handoffs');
logError(handoffDir, 'SessionStart', `Exception: ${e.message}`);
console.error(`\n⚠️ SessionStart hook error: ${e.message}`);
console.log(JSON.stringify({
userPrompt: "\n\n[ERROR LOADING PREVIOUS SESSION - Starting fresh]"
}));
process.exit(0);
}
});
These context persistence hooks:
- Work independently of protocol enforcement
- Can be used with or without the protocol-enforcer MCP server
- Provide robust error handling and validation
- Maintain non-blocking behavior (never prevent compaction/session operations)
- Create structured JSON handoffs in `.claude/handoffs/`
- Automatically clean up old handoffs (keeping the last 10)
- Log errors to `.claude/handoffs/errors.log` for debugging
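Each hook embeds the same schema check, so it can be exercised standalone to see exactly what a well-formed handoff must contain. This sketch condenses the `validateHandoff` logic from the scripts above (the per-array checks are folded into a loop, but the rules are identical):

```javascript
// Standalone version of the handoff schema check used by all three hooks.
function validateHandoff(handoff) {
  const errors = [];
  if (!handoff.date) errors.push('Missing date');
  if (!handoff.session_id) errors.push('Missing session_id');
  if (!handoff.status) errors.push('Missing status');
  if (!handoff.tasks || typeof handoff.tasks !== 'object') {
    errors.push('Missing or invalid tasks object');
  } else {
    // Every task bucket must be an array, even if empty.
    for (const key of ['completed', 'in_progress', 'pending', 'blockers']) {
      if (!Array.isArray(handoff.tasks[key])) errors.push(`tasks.${key} must be array`);
    }
  }
  if (!Array.isArray(handoff.decisions)) errors.push('decisions must be array');
  if (!Array.isArray(handoff.next_steps)) errors.push('next_steps must be array');
  return { valid: errors.length === 0, errors };
}

// A minimal handoff that satisfies every required field:
const good = {
  date: new Date().toISOString(),
  session_id: 'demo',
  status: 'in_progress',
  tasks: { completed: [], in_progress: [], pending: [], blockers: [] },
  decisions: [],
  next_steps: []
};
console.log(validateHandoff(good).valid);                       // true
console.log(validateHandoff({ date: 'x' }).errors.length);      // 5
```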
Known Limitation: Hooks cannot access the conversation history or the agent's internal state, so handoff files are created with empty arrays for tasks/decisions/next_steps. Workaround: edit the handoff file manually before the session ends, or track state in project documentation (CLAUDE.md).
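A manually completed handoff (all values here are illustrative; the field names are the ones `validateHandoff` requires) might look like:

```json
{
  "date": "2025-01-15T18:30:00.000Z",
  "session_id": "abc123",
  "status": "in_progress",
  "tasks": {
    "completed": ["Add validation to PreCompact hook"],
    "in_progress": ["Wire hooks into settings.json"],
    "pending": ["Document Appendix C format"],
    "blockers": []
  },
  "decisions": ["Keep only the last 10 handoff files"],
  "next_steps": ["Verify SessionStart injects this handoff"]
}
```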
Installation:
- Create the `.claude/hooks/` directory
- Save these scripts with a `.cjs` extension
- Make them executable: `chmod +x .claude/hooks/*.cjs`
- Configure them in `.claude/settings.json` (see Appendix C for format)
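As a rough sketch of the wiring (Appendix C is the authoritative format; the script filenames below are assumptions, not prescribed names), the hook configuration in `.claude/settings.json` takes this general shape:

```json
{
  "hooks": {
    "PreCompact": [
      { "hooks": [{ "type": "command", "command": "node .claude/hooks/pre-compact.cjs" }] }
    ],
    "SessionEnd": [
      { "hooks": [{ "type": "command", "command": "node .claude/hooks/session-end.cjs" }] }
    ],
    "SessionStart": [
      { "hooks": [{ "type": "command", "command": "node .claude/hooks/session-start.cjs" }] }
    ]
  }
}
```

Each script reads its JSON payload from stdin, so a hook can be smoke-tested outside the client by piping a payload with the fields the scripts expect (`sessionId`, `trigger` or `reason`, `cwd`) and then inspecting `.claude/handoffs/latest.json`.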
MIT License - Copyright (c) 2025 Jason Lusk