John Lindquist (johnlindquist)
@johnlindquist
johnlindquist / vercel-sandbox-vcpu-minimum.md
Last active March 4, 2026 22:03
Vercel Sandbox: vcpus=1 rejected by API despite docs showing it as valid


Summary

The Vercel Sandbox API rejects resources: { vcpus: 1 } with a 400 error, but multiple documentation pages show or imply 1 vCPU as a valid configuration. The actual minimum is 2 vCPUs (even numbers only).

This matters for I/O-bound workloads. We measured CPU utilization inside OpenClaw sandboxes over a 2-hour live test: <2% CPU even at 162 msgs/hr sustained. A 1 vCPU / 2 GB tier would halve memory costs, which are 97-99% of total sandbox cost for this workload.
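The cost argument can be made concrete with a toy model. The per-unit rates below are placeholders chosen so memory dominates (matching the 97-99% share measured above), not Vercel's published pricing:

```typescript
// Illustrative sandbox cost model. Rates are made-up placeholders, not
// Vercel pricing; they are picked so memory is ~99% of total cost.
type Tier = { vcpus: number; memGb: number };

function sandboxCost(
  tier: Tier,
  hours: number,
  cpuRatePerHr = 0.0001, // assumed: CPU is nearly free for <2% utilization workloads
  memRatePerHr = 0.01,   // assumed: memory dominates
): number {
  return hours * (tier.vcpus * cpuRatePerHr + tier.memGb * memRatePerHr);
}

// Current minimum tier vs the requested 1 vCPU / 2 GB tier, for one day:
const current = sandboxCost({ vcpus: 2, memGb: 4 }, 24);
const hypothetical = sandboxCost({ vcpus: 1, memGb: 2 }, 24);
// When memory dominates, halving memory roughly halves total cost.
```

Under these assumed rates the smaller tier costs almost exactly half, which is the whole argument for allowing `vcpus: 1` on I/O-bound workloads.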


@johnlindquist
johnlindquist / oidc-writeup.md
Last active March 4, 2026 22:36
Vercel OIDC for AI Gateway: Zero-Config Authentication Writeup


Problem

OpenClaw sandboxes on Vercel needed a manually configured AI_GATEWAY_API_KEY environment variable to authenticate with Vercel's AI Gateway for LLM inference. This was a deployment friction point: every new deployment or team member needed the key configured.

Solution

Replaced the static API key with Vercel's OIDC (OpenID Connect) token federation. The dashboard's Vercel Functions automatically receive an OIDC JWT that proves they're running on Vercel infrastructure. AI Gateway accepts this JWT as a bearer token — no static key needed.
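The auth-header side of this swap can be sketched as follows. The `VERCEL_OIDC_TOKEN` environment variable name matches what `vercel env pull` writes locally; the helper name and error text are illustrative:

```typescript
// Sketch: build AI Gateway request headers from Vercel's OIDC token instead
// of a static API key. Helper name is hypothetical; the Bearer scheme is the
// part confirmed above (AI Gateway accepts the OIDC JWT as a bearer token).
function gatewayHeaders(oidcToken: string | undefined): Record<string, string> {
  if (!oidcToken) {
    throw new Error("No OIDC token available (not running on Vercel, and no local token pulled)");
  }
  return {
    Authorization: `Bearer ${oidcToken}`,
    "Content-Type": "application/json",
  };
}

// Usage: pass process.env.VERCEL_OIDC_TOKEN (injected on Vercel / pulled locally).
// const headers = gatewayHeaders(process.env.VERCEL_OIDC_TOKEN);
```

The key design win is that the token proves *where* the code is running rather than *who* configured it, so there is no secret to rotate or share.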

@johnlindquist
johnlindquist / COMPARISON-GRID.md
Created March 3, 2026 00:47
Open Plugin - Plugin Feature Comparison Grid (8 tools + Open Plugin)


Generated: 2026-03-02
Source: GPT-5.2 Pro (via Oracle) + web research (March 2026)
Tools compared: Claude Code, Cursor, Codex CLI, OpenCode, Gemini CLI, Windsurf, VS Code / Copilot, Aider

Legend: ✅ full support | 🟡 partial/limited | ❌ no support


@johnlindquist
johnlindquist / REVIEW.md
Created March 3, 2026 00:47
Open Plugin - Full Project Review (GPT-5.2 Pro via Oracle)


Reviewed: 2026-03-02
Reviewer: GPT-5.2 Pro (via Oracle) + Claude Opus 4.6
Scope: Full project review against Claude Code's plugin system for cross-harness viability


1) Gap Analysis vs Claude Code

@johnlindquist
johnlindquist / v0-sandbox-gap.md
Created February 28, 2026 01:03
v0 Platform API: File uploads don't get full-stack sandbox previews


The Problem

When publishing Next.js apps via the v0 Platform API using chats.init({ type: "files" }), the preview (demo-*.vusercontent.net) does not execute API routes, Server Actions, or any server-side code. Every request — including POST /api/my-route — returns the page HTML instead.

This makes it impossible to build interactive demos that use workflows, database connections, or any backend logic when publishing via the Platform API.

The same app connected via chats.init({ type: "repo" }) gets a real Vercel Sandbox where API routes work, workflows execute, and server-side code runs as expected.
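The two init paths differ only in the payload's `type`, which decides whether a real sandbox backs the preview. A minimal payload chooser, with all field shapes beyond `type` assumed for illustration (not the actual v0 Platform API surface):

```typescript
// Sketch: choose between the two chats.init() payload shapes described above.
// Field names other than `type` are hypothetical placeholders.
type FilesInit = { type: "files"; files: { name: string; content: string }[] };
type RepoInit = { type: "repo"; repo: { url: string } };

function initPayload(
  repoUrl?: string,
  files?: { name: string; content: string }[],
): FilesInit | RepoInit {
  // Prefer "repo" when a repo URL exists: only that path currently gets a
  // real Vercel Sandbox where API routes and Server Actions execute.
  if (repoUrl) return { type: "repo", repo: { url: repoUrl } };
  // "files" falls back to a static preview that returns page HTML for
  // every request, including POST /api/* routes.
  return { type: "files", files: files ?? [] };
}
```

Until the gap is fixed, routing through a repo is the only way to get server-side code running in the preview.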

@johnlindquist
johnlindquist / workflow-devkit-demos.md
Created February 25, 2026 20:29
Workflow DevKit Interactive Demos — 21 patterns on v0


Interactive demos for Vercel Workflow DevKit — each showcasing a different durable workflow pattern built with Next.js 16, React 19, and the workflow package.

| # | Demo | Pattern | v0 Link |
| --- | --- | --- | --- |
| 1 | Approval Gate | Human-in-the-loop with timeout | Open in v0 |
| 2 | Onboarding Drip | 7-day email drip in 6 lines | Open in v0 |
| 3 | Saga | Step 3 fails — compensating rollbacks | Open in v0 |
| 4 | Cancellable Export | Cancel a batch mid-flight | Open in v0 |
@johnlindquist
johnlindquist / 10-tier-optimizations.md
Created February 24, 2026 15:50
Turborepo Performance Optimization Plans (17 research iterations with Oracle/GPT-5.2 sessions inlined)

Current State

Turborepo has already landed the big algorithmic win — topological-wave parallel task hashing (commit b3c0f46da8) — but several implementation details are now the bottleneck:

  • Parallelism is re-serialized by shared locks + deep clones: TaskHashTrackerState uses one RwLock guarding 6 independent HashMaps (crates/turborepo-task-hash/src/lib.rs:226-242), so any insert_hash() write blocks all concurrent reads. The visitor's wave-hashing funnels results through Arc<Mutex<HashMap>> with per-wave serialization (crates/turborepo-lib/src/task_graph/visitor/mod.rs:233-303).

  • Allocation hot-loops dominate hash & dispatch: DetailedMap (3 nested HashMaps) deep-cloned on every env_vars() read (line 622). EnvironmentVariableMap cloned per task at line 388. Repeated regex compilations for env patterns (4 per task in hashable_task_env) and glob patterns (O(packages) recompilations). SCM hashing allocates 10,000+ hex Strings inside parallel loops.

  • I/O paths lea
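The regex point in the list above is the most language-agnostic of the three, so here it is sketched in TypeScript rather than Turborepo's actual Rust: compile patterns once outside the hot loop instead of once per key.

```typescript
// Anti-pattern vs fix for repeated regex compilation in a hot loop.
// (Illustrative only; Turborepo's real code is Rust, per the file paths above.)
function matchEnvSlow(keys: string[], patterns: string[]): string[] {
  // Anti-pattern: every key re-compiles every pattern (O(keys * patterns) compiles).
  return keys.filter((k) => patterns.some((p) => new RegExp(p).test(k)));
}

function matchEnvFast(keys: string[], patterns: string[]): string[] {
  // Fix: compile each pattern once, then reuse across all keys.
  const compiled = patterns.map((p) => new RegExp(p));
  return keys.filter((k) => compiled.some((re) => re.test(k)));
}
```

Both return identical results; only the allocation profile differs, which is exactly the kind of win the plan targets.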

@johnlindquist
johnlindquist / openclaw-cron-pairing-report.md
Created February 20, 2026 16:19
OpenClaw Cron in Vercel Sandbox: The Pairing Wall — Root Cause Analysis & Fix


The User Story

You deploy OpenClaw (an AI gateway) inside a Vercel Sandbox and connect it to Telegram. Users chat with the bot. Everything works great — until someone says:

"Text me a silly joke every 5 minutes"

The bot tries to create a scheduled task (a cron job). It fails:

Status Update

tl;dr for boss

Locked in

  • OpenClaw on Vercel — all-in, nights/weekends, building messaging channels + dashboard
  • Workflow — 5 PRs, changelog, and X campaign demos nearly done; hitting weekly goals
  • ADX Club — will make time, sharing knowledge is non-negotiable for me
  • Open Plugin Spec — strong fit, already have the spec/alignment/leaderboard plan, ready to engage partners

Reconnecting Messaging Users to Stopped OpenClaw Sandboxes

tl;dr

  • Sandboxes stop after 10min idle — when a Telegram/Slack user sends a message, the bot is dead until it restarts
  • Sandbox URLs change on every restore, but our stable subdomain proxy ({key}.basedomain.com) already solves that
  • Option A (simpler): Enable OpenClaw's built-in channels — remove --skip-channels, use long polling mode so the Gateway polls Telegram itself. No webhook URLs needed. Downside: bot appears offline when sandbox is stopped, no way to send "booting up..." messages to users
  • Option B (better UX): Build our own webhook adapter — keep --skip-channels, add /api/webhooks/telegram on our always-on Next.js app. Queue messages in Redis, trigger restore, send "booting up..." progress messages, drain queue once sandbox is live. More code but full control over cold-start experience
  • Either way we need a wake-up mechanism to restore stopped sandboxes when messages arrive
  • OpenClaw natively supports 14+ platfor
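Option B's queue-and-drain flow can be sketched independently of the transport. An in-memory `Map` stands in for Redis here, and all function names are hypothetical:

```typescript
// Sketch of Option B: queue inbound messages while a stopped sandbox
// restores, then drain once it is live. In-memory Map stands in for Redis.
type Msg = { chatId: string; text: string };

const queue = new Map<string, Msg[]>(); // keyed by stable sandbox key ({key}.basedomain.com)

function enqueue(key: string, msg: Msg): number {
  const q = queue.get(key) ?? [];
  q.push(msg);
  queue.set(key, q);
  // Returned depth can drive UX, e.g. send "booting up..." on the first message.
  return q.length;
}

function drain(key: string): Msg[] {
  const q = queue.get(key) ?? [];
  queue.delete(key);
  // Caller forwards these to the sandbox once the restore completes.
  return q;
}
```

The webhook handler would `enqueue()` and trigger the wake-up mechanism; a health-check callback on the restored sandbox would `drain()` and forward, giving full control over the cold-start experience.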