Created February 18, 2026 15:38
| ▐▛███▜▌ Claude Code v2.1.45 | |
| ▝▜█████▛▘ Opus 4.6 · Claude Max | |
| ▘▘ ▝▝ ~/briefly | |
| ❯ /spaa I just did a Next.js turbopack trace of my "next build" compilation. The | |
| results are at .next/trace.spaa | |
| Can you give me some insights into where time is being spent in compilation? | |
| ● Let me analyze the SPAA trace file from your Next.js build. | |
| ● Searched for 1 pattern, read 2 files (ctrl+o to expand) | |
| ● The file isn't at .next/trace.spaa. Let me check the actual location — I | |
| noticed trace.spaa in the repo root from the git status. | |
| ● Searched for 1 pattern, read 2 files (ctrl+o to expand) | |
| ● Good, the data looks rich. Let me now get the top hotspots and resolve frame | |
| names. | |
| ● Now let me get more detail on the top callers and allocation hotspots. | |
| ● Now I have a complete picture. Here's the analysis: | |
| --- | |
| Next.js Turbopack Build Profile Analysis | |
| Build duration: 4.7s wall clock | Total CPU self-time: 26.6s across 33,918 | |
| stacks (good parallelism across cores) | |
| Top Time Consumers (Exclusive Self-Time) | |
| ┌──────┬──────────────────────────────────────────────────────┬────────────────┬──────────┬─────────────┐ | |
| │ Rank │ Operation                                            │ Self Time      │ Alloc    │ Occurrences │ | |
| ├──────┼──────────────────────────────────────────────────────┼────────────────┼──────────┼─────────────┤ | |
| │ 1    │ Minify ECMAScript code                               │ 4,525ms (17%)  │ 827 MB   │ 123         │ | |
| ├──────┼──────────────────────────────────────────────────────┼────────────────┼──────────┼─────────────┤ | |
| │ 2    │ Generate source maps                                 │ 1,221ms (4.6%) │ 1,130 MB │ 1,209       │ | |
| ├──────┼──────────────────────────────────────────────────────┼────────────────┼──────────┼─────────────┤ | |
| │ 3    │ Precompute code generation                           │ 852ms (3.2%)   │ 333 MB   │ 1,269       │ | |
| ├──────┼──────────────────────────────────────────────────────┼────────────────┼──────────┼─────────────┤ | |
| │ 4    │ Emit .nft.json asset                                 │ 644ms (2.4%)   │ 103 MB   │ 1           │ | |
| ├──────┼──────────────────────────────────────────────────────┼────────────────┼──────────┼─────────────┤ | |
| │ 5    │ Compute async module info                            │ 577ms (2.2%)   │ 73 MB    │ 3           │ | |
| ├──────┼──────────────────────────────────────────────────────┼────────────────┼──────────┼─────────────┤ | |
| │ 6    │ Generate merged code                                 │ 503ms (1.9%)   │ 231 MB   │ 34          │ | |
| ├──────┼──────────────────────────────────────────────────────┼────────────────┼──────────┼─────────────┤ | |
| │ 7    │ Emitting                                             │ 375ms (1.4%)   │ 52 MB    │ 1           │ | |
| ├──────┼──────────────────────────────────────────────────────┼────────────────┼──────────┼─────────────┤ | |
| │ 8    │ Emit code                                            │ 372ms (1.4%)   │ 114 MB   │ 34          │ | |
| ├──────┼──────────────────────────────────────────────────────┼────────────────┼──────────┼─────────────┤ | |
| │ 9    │ Node.js evaluation                                   │ 327ms (1.2%)   │ 0 MB     │ 3           │ | |
| ├──────┼──────────────────────────────────────────────────────┼────────────────┼──────────┼─────────────┤ | |
| │ 10   │ Analyze app-page-turbo-experimental.runtime.prod.js  │ 229ms          │ 115 MB   │ 1           │ | |
| └──────┴──────────────────────────────────────────────────────┴────────────────┴──────────┴─────────────┘ | |
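"Exclusive self-time" here means each sampled stack's weight is charged only to its leaf frame, not to every frame on the stack. A minimal sketch of that attribution, assuming a hypothetical `{ stack, weightMs }` sample shape (the real .spaa format differs):

```typescript
// Hypothetical sample shape: the leaf frame is the last stack element.
type Sample = { stack: string[]; weightMs: number };

// Charge each sample's weight only to its leaf frame (exclusive self-time).
function exclusiveSelfTime(samples: Sample[]): Map<string, number> {
  const self = new Map<string, number>();
  for (const { stack, weightMs } of samples) {
    const leaf = stack[stack.length - 1];
    self.set(leaf, (self.get(leaf) ?? 0) + weightMs);
  }
  return self;
}
```

Inclusive time, by contrast, would credit every frame on the stack, which is why exclusive totals are the right view for finding hotspots.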
| Key Insights | |
| 1. Minification is the single biggest bottleneck (17% of CPU time) | |
| Turbopack's SWC minifier spent 4.5s of CPU time across 123 chunks. This is | |
| expected for production builds but dominates everything else. If you're | |
| iterating on build speed, consider: | |
| - Using swcMinify: false temporarily during testing (not for prod) | |
| - Checking if you have unusually large bundles being minified | |
| 2. Source map generation is the #1 memory consumer (1.1 GB allocated) | |
| Despite being #2 in time (1.2s), source maps allocated the most memory at | |
| 1,130 MB across 1,209 invocations. If build memory is a concern, you could | |
| disable source maps in next.config.ts with productionBrowserSourceMaps: false. | |
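In next.config.ts that would look like the following (a minimal sketch; note that `productionBrowserSourceMaps` already defaults to false, so this only changes anything if it was explicitly enabled somewhere):

```typescript
// next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  // Skip browser source-map generation in production builds.
  // false is the default; shown explicitly here for clarity.
  productionBrowserSourceMaps: false,
};

export default nextConfig;
```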
| 3. The .nft.json trace file emission is surprisingly expensive (644ms) | |
| A single next-server.js.nft.json emit took 644ms and 103 MB. This is the | |
| Node.js file tracing output — it's scanning your dependency tree to determine | |
| which files are needed at runtime. | |
| 4. Heavy node_modules being analyzed | |
| Several large vendored files are individually expensive to parse/analyze: | |
| - app-page-turbo-experimental.runtime.prod.js — 229ms + 115 MB | |
| - app-page-turbo.runtime.prod.js — 174ms + 112 MB | |
| - react-dom-client.production.js — 172ms + 79 MB | |
| - ai SDK (ai/dist/index.mjs) — analysis + parsing ~135ms | |
| - jsonwebtoken — 10 MB live memory during analysis | |
| 5. Memory retention (potential build memory pressure) | |
| The top live (un-freed) memory holders during build: | |
| - precompute code generation: 189 MB retained | |
| - generate source maps: 57 MB retained | |
| - .nft.json emit: 27 MB retained | |
| - async module info: 26 MB retained | |
| 6. External fetches add latency | |
| Google Fonts requests are happening during build (fonts.googleapis.com + | |
| fonts.gstatic.com). These are network-bound and add ~100ms+ each. If build | |
| speed matters in CI, consider next/font/local with the font files checked | |
| into the repo, which avoids the network fetch entirely. | |
| Summary | |
| Your build is well-parallelized (26.6s CPU compressed into 4.7s wall clock = | |
| ~5.6x parallelism). The main costs are inherent to production builds: | |
| minification (17%), source maps (4.6%), and code generation (3.2%). There's no | |
| single pathological hotspot — time is distributed across many operations. The | |
| biggest actionable wins would be disabling source maps if you don't need | |
| them, and checking if the ai SDK bundle size can be reduced (tree-shaking). | |
| ✻ Baked for 1m 58s | |
| ❯ Can you show me the top 10 most expensive modules, excluding the | |
| turbopack runtime and react? | |
| ● Here are the top 10 most expensive modules (excluding Next.js runtime and | |
| React): | |
| ┌──────┬──────────────────────────────────────┬─────────────────────┬───────────┬───────┐ | |
| │ Rank │ Module                               │ Operation           │ Self Time │ Alloc │ | |
| ├──────┼──────────────────────────────────────┼─────────────────────┼───────────┼───────┤ | |
| │ 1    │ ai (ai/dist/index.mjs)               │ analyze [app-route] │ 70ms      │ 39 MB │ | |
| ├──────┼──────────────────────────────────────┼─────────────────────┼───────────┼───────┤ | |
| │ 2    │ ai (ai/dist/index.mjs)               │ analyze [app-rsc]   │ 62ms      │ 39 MB │ | |
| ├──────┼──────────────────────────────────────┼─────────────────────┼───────────┼───────┤ | |
| │ 3    │ ai (ai/dist/index.mjs)               │ parse               │ 49ms      │ 15 MB │ | |
| ├──────┼──────────────────────────────────────┼─────────────────────┼───────────┼───────┤ | |
| │ 4    │ @neondatabase/serverless (index.mjs) │ analyze [app-rsc]   │ 46ms      │ 28 MB │ | |
| ├──────┼──────────────────────────────────────┼─────────────────────┼───────────┼───────┤ | |
| │ 5    │ @neondatabase/serverless (index.mjs) │ parse               │ 46ms      │ 13 MB │ | |
| ├──────┼──────────────────────────────────────┼─────────────────────┼───────────┼───────┤ | |
| │ 6    │ zod (v3/types.js)                    │ analyze [app-rsc]   │ 35ms      │ 11 MB │ | |
| ├──────┼──────────────────────────────────────┼─────────────────────┼───────────┼───────┤ | |
| │ 7    │ zod (v3/types.js)                    │ analyze [app-route] │ 26ms      │ 11 MB │ | |
| ├──────┼──────────────────────────────────────┼─────────────────────┼───────────┼───────┤ | |
| │ 8    │ lib/workflows/inbound-email.ts       │ parse               │ 24ms      │ 3 MB  │ | |
| ├──────┼──────────────────────────────────────┼─────────────────────┼───────────┼───────┤ | |
| │ 9    │ zod (v3/types.js)                    │ parse               │ 16ms      │ 5 MB  │ | |
| ├──────┼──────────────────────────────────────┼─────────────────────┼───────────┼───────┤ | |
| │ 10   │ zod (v4/core/schemas.js)             │ analyze [app-route] │ 16ms      │ 8 MB  │ | |
| └──────┴──────────────────────────────────────┴─────────────────────┴───────────┴───────┘ | |
| Takeaways | |
| - ai SDK dominates — 181ms total (70+62+49) and 93 MB allocated. It ships a | |
| single large index.mjs bundle that gets analyzed twice (once for app-route, | |
| once for app-rsc) and parsed once. If you're only using a few functions from | |
| it, there may not be much you can do — the SDK bundles everything into one | |
| file. | |
| - @neondatabase/serverless is #2 at 92ms and 41 MB. Same pattern — single | |
| large index.mjs entry point. | |
| - zod appears 4 times across v3 and v4. Combined: 93ms and 35 MB. Notably, | |
| both v3/types.js and v4/core/schemas.js are being processed — you might be | |
| pulling in both Zod v3 and v4 compat layers. If you can migrate fully to v4 | |
| imports, you could eliminate the v3 analysis. | |
| - Your own code (lib/workflows/inbound-email.ts) is the only first-party file | |
| in the top 10 at 24ms. That file may be unusually large or have many imports | |
| worth checking. | |
| ✻ Cogitated for 45s | |
| ❯ You have access to the repo, feel free to look at inbound-email.ts | |
| ● Read 1 file (ctrl+o to expand) | |
| ● At 465 lines, the file itself isn't particularly large. Its import list, | |
| though, is long. It pulls in: | |
| - @/lib/db (drizzle schema) | |
| - drizzle-orm | |
| - resend | |
| - @/lib/email/forward-parser | |
| - @/lib/email/domain-utils | |
| - @/lib/company/resolution | |
| - @/lib/company/state | |
| - @/lib/llm/analyze | |
| Each of those transitively brings in more dependencies (drizzle-orm, the | |
| Resend SDK, your LLM analyze module, which likely imports the ai SDK, etc.). | |
| But Turbopack attributes parse time per file, so the 24ms/3MB here is only | |
| the cost of parsing this single .ts file's AST, not its dependency graph. | |
| That's reasonable for a 465-line TypeScript file with type annotations and | |
| string literals. | |
| Bottom line: Nothing pathological here. It's just the largest first-party file | |
| showing up in the trace. The real cost drivers in your build are all | |
| third-party: ai, @neondatabase/serverless, and zod. | |
| ❯ Can you aggregate the costs by package name instead of individual files? | |
| ● Here's the full picture by package: | |
| ┌──────┬──────────────────────────┬───────────┬──────────┬───────┐ | |
| │ Rank │ Package │ Self Time │ Alloc │ Files │ | |
| ├──────┼──────────────────────────┼───────────┼──────────┼───────┤ | |
| │ 1 │ next │ 3,210ms │ 1,172 MB │ 2,980 │ | |
| ├──────┼──────────────────────────┼───────────┼──────────┼───────┤ | |
| │ 2 │ zod │ 439ms │ 174 MB │ 496 │ | |
| ├──────┼──────────────────────────┼───────────┼──────────┼───────┤ | |
| │ 3 │ react-dom │ 420ms │ 166 MB │ 30 │ | |
| ├──────┼──────────────────────────┼───────────┼──────────┼───────┤ | |
| │ 4 │ undici │ 199ms │ 66 MB │ 192 │ | |
| ├──────┼──────────────────────────┼───────────┼──────────┼───────┤ | |
| │ 5 │ svix │ 190ms │ 54 MB │ 832 │ | |
| ├──────┼──────────────────────────┼───────────┼──────────┼───────┤ | |
| │ 6 │ ai │ 181ms │ 92 MB │ 4 │ | |
| ├──────┼──────────────────────────┼───────────┼──────────┼───────┤ | |
| │ 7 │ drizzle-orm │ 172ms │ 56 MB │ 424 │ | |
| ├──────┼──────────────────────────┼───────────┼──────────┼───────┤ | |
| │ 8 │ @opentelemetry/api │ 113ms │ 30 MB │ 374 │ | |
| ├──────┼──────────────────────────┼───────────┼──────────┼───────┤ | |
| │ 9 │ @neondatabase/serverless │ 93ms │ 41 MB │ 4 │ | |
| ├──────┼──────────────────────────┼───────────┼──────────┼───────┤ | |
| │ 10 │ @workflow/core │ 39ms │ 13 MB │ 60 │ | |
| └──────┴──────────────────────────┴───────────┴──────────┴───────┘ | |
| Observations | |
| - next itself is 57% of all module parse/analyze time (3.2s). That's 2,980 | |
| files — the Next.js runtime is just large. Nothing you can do about that. | |
| - svix is worth noting — 190ms across 832 files. That's a lot of individual | |
| modules being analyzed for a webhooks SDK. If you're only using a small part | |
| of its API, it might be worth checking if there's a lighter alternative or if | |
| you can limit what gets traced. | |
| - zod at 496 files confirms it's pulling in both v3 and v4 modules. Migrating | |
| fully to v4 imports could cut this roughly in half. | |
| - undici (192 files, 199ms) is Node's HTTP client — likely pulled in | |
| transitively by resend or another SDK. | |
| - @opentelemetry/api (374 files, 113ms) is a lot of files for a tracing API. | |
| It's probably a transitive dep from Next.js or Vercel functions. | |
| - Your first-party code ([project]/*) totals ~85ms across ~100 files — very | |
| modest compared to dependencies. | |
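Aggregation like the table above can be sketched as follows, assuming a hypothetical flat `{ path, selfTimeMs, allocMb }` entry shape (the real trace format differs):

```typescript
// Hypothetical flattened trace entry; the real .spaa format differs.
type TraceEntry = { path: string; selfTimeMs: number; allocMb: number };

// Scoped packages keep two path segments after node_modules/,
// unscoped packages keep one; anything else is first-party code.
function packageOf(path: string): string {
  const m = path.match(/node_modules\/((?:@[^/]+\/)?[^/]+)/);
  return m ? m[1] : "[project]";
}

type Totals = { selfTimeMs: number; allocMb: number; files: Set<string> };

function aggregateByPackage(entries: TraceEntry[]): [string, Totals][] {
  const totals = new Map<string, Totals>();
  for (const e of entries) {
    const pkg = packageOf(e.path);
    const t =
      totals.get(pkg) ?? { selfTimeMs: 0, allocMb: 0, files: new Set<string>() };
    t.selfTimeMs += e.selfTimeMs;
    t.allocMb += e.allocMb;
    t.files.add(e.path);
    totals.set(pkg, t);
  }
  // Sort by self-time, descending, like the table above.
  return [...totals.entries()].sort((a, b) => b[1].selfTimeMs - a[1].selfTimeMs);
}
```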
| ✻ Baked for 50s | |
| ❯ Can you show me what's pulling in undici? | |
| ● Every single stack is the same import chain: | |
| app/.well-known/workflow/v1/flow/route.js | |
| └─ @workflow/core/dist/runtime.js | |
| └─ @workflow/core/dist/runtime/world.js | |
| └─ @workflow/world-local/dist/index.js | |
| └─ @workflow/world-local/dist/queue.js | |
| └─ undici/index.js ← pulls in all 192 undici files | |
| @workflow/world-local is the sole importer of undici. Its queue.js module | |
| imports undici/index.js (the full package entry point), which drags in the | |
| entire HTTP client — fetch, websockets, cache storage, and all. | |
| This costs 199ms and 66 MB just for module analysis. Combined with | |
| @workflow/core (39ms) and @workflow/world-local itself (19ms), the workflow | |
| SDK accounts for ~257ms of build time. If @workflow/world-local only needs | |
| fetch from undici, that's a lot of dead code being parsed — but since it's a | |
| third-party dependency, there's not much you can do besides flagging it | |
| upstream. | |
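Answering "what pulls in X" is a reverse walk over the module graph. A minimal sketch, assuming a hypothetical adjacency map of importer to imports (the real trace stores richer stack data):

```typescript
// Hypothetical module graph: importer -> list of modules it imports.
type Graph = Map<string, string[]>;

// Walk backwards from `target` toward a module with no importer,
// returning one chain entry -> ... -> target, or null if nothing imports it.
function importChain(graph: Graph, target: string): string[] | null {
  // Invert edges: module -> its importers.
  const importers = new Map<string, string[]>();
  for (const [from, tos] of graph) {
    for (const to of tos) {
      const list = importers.get(to) ?? [];
      list.push(from);
      importers.set(to, list);
    }
  }
  const chain = [target];
  const seen = new Set([target]);
  let current = target;
  while (true) {
    const ins = importers.get(current)?.filter((m) => !seen.has(m));
    if (!ins || ins.length === 0) break; // reached an entry point (or a cycle)
    current = ins[0];
    seen.add(current);
    chain.unshift(current);
  }
  return chain.length > 1 ? chain : null;
}
```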
| ✻ Baked for 35s | |
| ❯ Do I want to know how much time is spent analyzing workflow? | |
| ● Yeah... 2,743ms — that's nearly half of your total module analysis budget | |
| (5,658ms), and over half a second of wall-clock time. | |
| The workflow SDK's transitive dependency tree is pulling in: | |
| ┌──────────────────────────────────────┬───────┬─────────────────────────────────────────────────────────┐ | |
| │ Package                              │ Time  │ Why                                                     │ | |
| ├──────────────────────────────────────┼───────┼─────────────────────────────────────────────────────────┤ | |
| │ zod                                  │ 849ms │ Workflow schemas (likely most of your zod cost)         │ | |
| ├──────────────────────────────────────┼───────┼─────────────────────────────────────────────────────────┤ | |
| │ undici                               │ 369ms │ @workflow/world-local importing the full HTTP client    │ | |
| ├──────────────────────────────────────┼───────┼─────────────────────────────────────────────────────────┤ | |
| │ @workflow/*                          │ 277ms │ The SDK itself: core/world-local/world-vercel/world     │ | |
| ├──────────────────────────────────────┼───────┼─────────────────────────────────────────────────────────┤ | |
| │ ai + @ai-sdk/*                       │ 190ms │ AI SDK pulled in through workflow                       │ | |
| ├──────────────────────────────────────┼───────┼─────────────────────────────────────────────────────────┤ | |
| │ Node.js eval + startup               │ 269ms │ Runtime bootstrap for workflow                          │ | |
| ├──────────────────────────────────────┼───────┼─────────────────────────────────────────────────────────┤ | |
| │ @vercel/oidc + functions             │ 60ms  │ Vercel platform deps                                    │ | |
| ├──────────────────────────────────────┼───────┼─────────────────────────────────────────────────────────┤ | |
| │ misc (seedrandom, devalue, debug...) │ 88ms  │                                                         │ | |
| └──────────────────────────────────────┴───────┴─────────────────────────────────────────────────────────┘ | |
| That single app/.well-known/workflow/v1/flow/route.js route is responsible | |
| for 2.7 seconds of CPU time and 547 MB of allocations during your build. For | |
| context, that's more than six times the react-dom analysis (420ms), and it | |
| approaches the entire next framework analysis (3,210ms). | |
| If the workflow route isn't critical to every build iteration, lazy-loading it | |
| or moving it behind a dynamic import could meaningfully speed things up. | |
| ✻ Worked for 39s | |
| ❯ Okay, so what can I do to make my compile time faster? | |
| ● Based on everything we've seen, here are the actionable items ranked by | |
| impact: | |
| High Impact | |
| 1. Lazy-load the workflow route | |
| The app/.well-known/workflow/v1/flow/route.js route and its transitive deps | |
| account for 2.7s of CPU (48% of module analysis). If this route can use | |
| next/dynamic or dynamic import() for the @workflow/core runtime, it would | |
| defer parsing of undici, zod, ai-sdk, seedrandom, and all the other transitive | |
| deps until runtime instead of build time. | |
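The deferral pattern is generic: cache a dynamic import() behind a function so the heavy module graph is loaded once, on first request, rather than eagerly at route-module load. A sketch (the @workflow/core usage shown in comments is hypothetical; its real export names aren't visible in the trace):

```typescript
// Generic lazy loader: defers (and caches) a dynamic import until first use.
function lazy<T>(loader: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => (cached ??= loader());
}

// Hypothetical usage inside the route handler; real export names may differ:
// const getRuntime = lazy(() => import("@workflow/core"));
// export async function POST(req: Request) {
//   const runtime = await getRuntime();
//   ...
// }
```

Whether this actually moves work out of the build depends on how Turbopack treats the dynamic import, so it's worth re-running the trace after the change.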
| 2. Disable source maps for dev iteration | |
| Source map generation was 1.2s CPU and 1.1 GB of allocations — the single | |
| largest memory consumer. In next.config.ts: | |
| productionBrowserSourceMaps: false, // already the default, but verify | |
| If you're running next build frequently during development, this matters. | |
| Medium Impact | |
| 3. Migrate off Zod v3 compat | |
| Zod appeared as 496 files / 439ms total, and the trace shows both v3/types.js | |
| and v4/core/schemas.js being analyzed. If you can move all imports to zod/v4 | |
| (or just zod if v4 is the default export), you'd drop the v3 analysis entirely | |
| — likely saving ~200ms. | |
| 4. Check if @workflow/world-local is needed in production builds | |
| This single package pulls in all of undici (192 files). If world-local is only | |
| for local dev and you use @workflow/world-vercel in production, you could | |
| conditionally import it or split it to a separate entry point. | |
| Low Impact (but free) | |
| 5. Google Fonts fetch during build | |
| The trace shows network requests to fonts.googleapis.com during compilation | |
| (~100ms+). If you're already using next/font/google, this is expected — but in | |
| CI without caching, it adds latency on every build. | |
| Not worth pursuing | |
| - Minification (4.5s) — inherent to production builds, no way around it | |
| - next framework analysis (3.2s) — the framework itself, can't change it | |
| - react-dom (420ms) — necessary | |
| - Your first-party code (~85ms) — already very lean | |
| The workflow route is the single biggest win by far. Everything else is | |
| incremental. |