
@vishalsachdev
Last active March 6, 2026 01:54
Claude Code Insights Report — 109 sessions, 97 hours, 3 days of AI-assisted infrastructure & education operations

Claude Code Insights

546 sessions total · 109 analyzed · 1,429 messages · 97h · 32 commits · 2026-03-03 to 2026-03-05


At a Glance

What's working: You've built a genuinely impressive operational workflow — deploying Cloudflare Workers, migrating sites, and running live drip campaigns to students, all orchestrated through Claude Code. Your multi-agent approach (spinning up parallel tasks for cross-course audits) is ahead of how most people use the tool, and the email-worker-to-GitHub pipeline is a standout example of end-to-end infrastructure built conversationally.

What's hindering you: On Claude's side, it has a stubborn habit of diving into your codebase when you ask general knowledge questions — you asked about Cloudflare multiple times and got filesystem searches instead of answers. On your side, many of your observer/memory agent sessions produced very little useful output, and infrastructure sessions often hit avoidable mid-stream pivots (wrong SSL config, platform limitations discovered too late) that a quick constraints check upfront would catch.

Quick wins to try: Try creating custom slash commands for your recurring workflows — a /drip-campaign skill that includes your scheduling preferences and dry-run steps, or a /migrate-site skill with the Cloudflare constraints checklist baked in. You could also experiment with headless mode to batch your cross-repo operations (like adding analytics beacons to 17 sites) as a single scripted run instead of interactive sessions.

Ambitious workflows: As models get better at sustained autonomous execution, your 17-site Cloudflare migration becomes a single command — Claude audits each site, executes the migration, verifies DNS/SSL, and only surfaces failures. Your drip campaigns could evolve into a semester-long pipeline that auto-generates weekly content, validates against the academic calendar, dry-runs, and self-deploys with automatic rescheduling for missed windows. Prepare by defining clear success criteria (valid links, correct instructor names, delivery confirmation) so future agents can verify their own work.


Project Areas

Cloudflare Infrastructure Migration (14 sessions)

Migrated multiple GitHub Pages sites to Cloudflare Pages, added Cloudflare Web Analytics beacons, and created per-repo migration plans across ~17 repositories. Claude Code was used to audit existing sites, generate migration plan files, configure Cloudflare Pages deployments, and troubleshoot platform limitations like Git auto-deploy and DNS configuration.

Cloudflare Email Worker Development (5 sessions)

Built and deployed a Cloudflare Email Worker that auto-commits forwarded emails (with attachments) to a GitHub repository. Claude Code handled the full implementation lifecycle from architecture planning through deployment and testing, including pivoting from R2 storage to direct GitHub commits based on user feedback.
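The auto-commit step can be sketched as a small helper that builds the request body for GitHub's "create or update file contents" API (`PUT /repos/{owner}/{repo}/contents/{path}`). The helper name and commit-message format below are illustrative, not taken from the actual worker, and the surrounding Email Worker wiring (the `email(message, env, ctx)` handler and the authenticated `fetch`) is omitted.

```javascript
// Hypothetical helper illustrating the email-to-GitHub pattern:
// GitHub's contents API expects the file body as base64, plus a
// commit message, in a JSON payload.
function buildCommitPayload(from, subject, rawEmailBytes) {
  const content = Buffer.from(rawEmailBytes).toString("base64");
  return {
    message: `email: ${subject} (from ${from})`,
    content,
  };
}
```

Inside the Email Worker, a payload like this would be sent via `fetch` from the `email` handler after reading `message.raw`, with a GitHub token supplied through an environment binding.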

Canvas Course Content & Drip Campaigns (8 sessions)

Created and refined Canvas announcements for WhatsApp study companion bots, planned and deployed multi-message drip campaigns for course content delivery, and applied content patterns across courses. Claude Code was used for content drafting, schedule management, VPS deployment, and campaign execution reaching 36 students.

MSBAi Website & Documentation Audit (10 sessions)

Conducted comprehensive UX/content audits of the MSBAi website, updated instructor information, removed sensitive program numbers, reduced documentation duplication, and standardized tech stack references across 6 courses. Claude Code ran parallel audit agents, performed multi-file edits, and deployed updates to Cloudflare Pages.

Canvas MCP Server Deployment & Optimization (6 sessions)

Deployed the Canvas MCP server to production with HTTP transport, configured nginx with SSL and Cloudflare routing, and optimized docstrings across 15 tool files (~350 lines reduced). Claude Code handled server configuration, DNS debugging, documentation updates, and release management including v1.1.0 tagging.


How You Use Claude Code

You are an extraordinarily high-volume operator who runs Claude Code as a persistent infrastructure workhorse — 109 sessions and 97 hours across just 3 days is remarkable intensity. Your workflow revolves around orchestrating multi-agent sessions, frequently spawning observer/memory agents alongside primary sessions to document progress in real-time. This is evident in the many observer agent sessions capturing checkpoints of your Cloudflare migrations, documentation audits, and deployment work. You clearly treat Claude as an autonomous executor: you issue high-level directives like "migrate to Cloudflare Pages," "deploy drip campaign to 36 students," or "run a 10-agent UX audit" and let Claude run with heavy Bash (388 calls) and multi-file editing workflows. The 58 Agent tool calls and 84 TaskUpdate calls confirm you're delegating parallel workstreams rather than micromanaging individual steps.

Your interaction style is iterative and deployment-oriented rather than spec-heavy upfront. You push things to production quickly, then course-correct — like the drip campaign where you revised the schedule after missing the 9 AM slot, or the Cloudflare Worker where you pivoted from R2 storage to GitHub commits mid-session after realizing the access limitation. The 9 instances of "wrong_approach" friction show that you're comfortable letting Claude explore and then redirecting when it goes off track. You do interrupt when needed (e.g., stopping a tool call when Claude tried editing an already-published announcement), but generally you let sessions run long and wrap up with git commits and documentation.

Key pattern: You run Claude Code as a massively parallel operations engine, spawning observer agents for documentation while primary sessions handle deployments, migrations, and content delivery across dozens of repositories simultaneously.


Impressive Things You Did

Full-Stack Cloudflare Infrastructure Deployment

You built and deployed a Cloudflare Email Worker that auto-commits forwarded emails to GitHub, migrated sites from GitHub Pages to Cloudflare Pages, and deployed an MCP server with nginx and SSL at a custom domain. This end-to-end infrastructure work across Workers, Pages, DNS, and email routing shows sophisticated cloud orchestration driven through Claude Code.

Multi-Agent Parallel Documentation Audits

You consistently spun up observer/memory agents alongside primary sessions and launched parallel task execution — like the 6-agent P0+P1 audit that updated tech stacks across 6 courses simultaneously. Your use of the Agent and TaskUpdate tools (142 combined calls) shows you've built a real workflow around coordinated multi-agent sessions for large-scale refactoring.

Automated Student Drip Campaigns

You planned, revised, and deployed timed WhatsApp drip campaigns to students, including rescheduling around missed windows and verifying with dry-runs before launch. The Week 8 campaign successfully reached 36 students, demonstrating how you've integrated Claude Code into a live educational communication pipeline with real operational stakes.


Where Things Go Wrong

Your sessions show a pattern of Claude misinterpreting intent on knowledge questions, excessive use of observer agents with low payoff, and multi-step deployments hitting avoidable configuration detours.

Knowledge Questions Answered with Codebase Searches

When you asked Claude to explain Cloudflare services, it repeatedly searched your local filesystem instead of just answering the question. Try prefixing knowledge questions with something like "Don't look at any files—just explain..." to prevent this.

  • You asked for a Cloudflare explanation at least three separate times, and each time Claude grepped your repos for Cloudflare references instead of providing the overview you wanted
  • The codebase-exploration approach burned entire sessions without delivering the straightforward explanation you needed, forcing you to re-ask across multiple sessions

Low-Value Observer Agent Overhead

You ran many memory observer agent sessions that produced minimal useful output—hallucinated summaries, near-empty observations, or recordings of routine file reads. Consider reserving observer agents for complex multi-hour sessions rather than short or routine ones.

  • An observer agent hallucinated checkpoint summaries before any real work had occurred in the primary session, producing incorrect records
  • Multiple observer sessions on brief primary sessions (e.g., wrap-ups, single commits) captured almost nothing substantive while still consuming session slots and tokens

Infrastructure Configuration Trial-and-Error

Your deployment and migration sessions hit repeated mid-stream pivots—wrong SSL setup, platform limitations discovered late, and incorrect architecture choices. You could reduce this by having Claude produce a verified checklist of platform constraints before starting implementation.

  • The Cloudflare Pages migration discovered after deployment that Direct Upload projects can't connect to Git, requiring a workaround that could have been caught with upfront research
  • The MCP server deployment started with Let's Encrypt SSL before realizing Cloudflare handles SSL termination, then had to debug Worker routing conflicts and missing DNS records—multiple configuration restarts in one session

Suggestions

CLAUDE.md Additions

1. Answer knowledge questions directly

"When asked a general knowledge question (e.g., 'explain Cloudflare', 'what is X'), answer directly from knowledge first. Do NOT search the local codebase unless the user specifically asks about local usage."

Why: Claude searched local files instead of answering general knowledge questions about Cloudflare in 3+ separate sessions, frustrating the user each time.

2. Avoid duplicate file reads

"Avoid duplicate file reads. If you've already read a file in this session, reference your memory of it rather than reading it again unless the file has been modified since."

Why: Multiple sessions showed excessive re-reads of the same files (README.md, AGENTS.md, index.html read 3+ times each) before making edits.

3. Canvas announcement workflow

"For Canvas announcements: draft the full content BEFORE posting. Once posted, announcements are live and cannot be easily edited. Always confirm content with the user before publishing."

Why: Multiple sessions had friction with Canvas announcements — failed first attempts, post-publish editing issues, and user interruptions during tool calls.

4. Cloudflare platform constraints

"When working with Cloudflare Pages: Direct Upload projects cannot be connected to Git after creation. Plan deployment method (Git vs Direct Upload) before creating the project. Cloudflare SSL termination is preferred over Let's Encrypt when traffic routes through Cloudflare."

Why: Multiple Cloudflare migration sessions hit the same platform limitations around Git connection and SSL configuration, requiring mid-session pivots.

5. VPS deployment checklist

"When deploying to VPS, verify nginx config uses Cloudflare SSL termination (not Let's Encrypt) and check for Cloudflare Worker routing conflicts on subdomains before testing."

Why: The MCP server deployment session hit multiple issues with SSL and Worker routing that could be avoided with a pre-deployment checklist.
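The checklist above could be captured as a known-good nginx block. This is only a sketch: the domain, certificate paths, and upstream port are placeholders, and the key point is terminating SSL with a Cloudflare Origin Certificate instead of Let's Encrypt when traffic is proxied through Cloudflare.

```nginx
# Sketch: origin nginx config for a host behind Cloudflare's proxy.
# All paths, the server_name, and the upstream port are placeholders.
server {
    listen 443 ssl;
    server_name mcp.example.com;

    # Cloudflare Origin Certificate — trusted only between Cloudflare
    # and this origin; browsers never see it directly.
    ssl_certificate     /etc/ssl/cloudflare/origin.pem;
    ssl_certificate_key /etc/ssl/cloudflare/origin.key;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```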

Features to Try

Custom Skills — Reusable prompt workflows triggered by a single /command. You frequently do session wrap-ups (5 sessions), git operations (6 sessions), deployment (5 sessions), and progress documentation (5 sessions) — these are perfect candidates for standardized /wrapup, /deploy, and /drip-campaign skills.
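A custom command is just a markdown prompt file; one possible `/wrapup` sketch (the filename and wording here are illustrative, not an existing command):

```markdown
<!-- .claude/commands/wrapup.md — invoked as /wrapup -->
Summarize what changed in this session, stage and commit the work with
a descriptive message, and append a short checkpoint entry to the
progress log. Do not push unless I explicitly confirm.
```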

Hooks — Auto-run shell commands at lifecycle events like post-edit. With 208 Markdown files and 112 HTML files edited, a post-edit hook could auto-format HTML/Markdown and catch issues before they're committed.
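One possible shape for such a hook in `.claude/settings.json`, assuming a `PostToolUse` event matched against the Edit/Write tools and a blanket Prettier pass; check the hooks documentation for the exact schema and for reading the edited file path from the hook's stdin instead of reformatting everything:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npx prettier --write '**/*.{md,html}'"
          }
        ]
      }
    ]
  }
}
```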

Headless Mode — Run Claude non-interactively from scripts. Your drip campaigns and deployment workflows are becoming repeatable patterns. Headless mode could let you script scheduled deployments or batch site audits across your ~17 GitHub Pages repos without manual interaction.
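A batch pass over the repos could be as simple as a loop around `claude -p` (print/headless mode). The sketch below is a dry run that only prints each invocation; the repo names are placeholders, and you should verify the flags with `claude --help` before swapping the `echo` for the real command.

```shell
#!/bin/sh
# Dry-run sketch: print the headless invocation for each repo
# instead of executing it. Repo names are placeholders.
cmds=""
for repo in site-a site-b site-c; do
  cmd="cd $repo && claude -p 'Add the Cloudflare analytics beacon to index.html'"
  echo "$cmd"
  cmds="$cmds$cmd "
done
```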

Usage Pattern Improvements

1. Too many observer/memory agent sessions producing little value

Reduce observer agent usage — most produced "slightly_helpful" results with partial outcomes. Of your 36 analyzed sessions, roughly 15 were observer/memory agent sessions, and nearly all were rated "slightly_helpful" with "partially_achieved" outcomes. Reserve observer agents for long, complex sessions where checkpointing is genuinely needed. For shorter sessions, a /wrapup custom skill would capture the same information more reliably.

2. Wrong-approach friction is your top issue

Ask Claude to state its plan before executing, especially for knowledge questions and infrastructure work. 9 of your friction events were "wrong_approach" — the highest category by far. Adding a "think before acting" instruction to CLAUDE.md would catch these before wasted effort.

3. Consolidate multi-repo operations into scripted workflows

Use batch operations for cross-repo tasks like Cloudflare migration and beacon addition. You're managing ~17 GitHub Pages sites, multiple course repos, and infrastructure across VPS and Cloudflare. A headless mode script or custom skill that iterates across repos would save significant time.


On the Horizon

Your 97 hours across 109 sessions in just 3 days reveal a power user ready to shift from hands-on orchestration to fully autonomous infrastructure and content pipelines.

Autonomous Multi-Site Deployment Pipeline

With 17+ sites being migrated to Cloudflare and frequent deployment friction, Claude could autonomously handle the entire migration lifecycle — auditing each site, executing the migration, verifying DNS/SSL, and running post-deploy checks. Parallel agents could migrate multiple sites simultaneously, each reporting back only on failures or decisions requiring human input.

Self-Correcting Drip Campaign Factory

Your drip campaigns required manual scheduling fixes and content revisions. An autonomous workflow could generate an entire semester's worth of weekly campaigns, validate send times against the academic calendar, dry-run each one, and auto-deploy — iterating against delivery verification tests before marking complete.

Parallel Documentation Audit With Tests

Your 6-agent documentation audit was partially achieved with duplicate file reads and incomplete execution. A test-driven approach would define assertions first — no duplicate content blocks, consistent version numbers, correct instructor names, valid links — then launch parallel agents that each fix violations and iterate until all tests pass.


Fun Finding

User asked Claude to explain what Cloudflare is — Claude spent the entire session grepping the local filesystem for Cloudflare references instead of just answering the question. This happened at least THREE separate times across different sessions. It finally answered on the fourth attempt.


Generated by Claude Code /insights · March 2026
