@johnlindquist
Last active March 4, 2026 22:03

Vercel Sandbox: vcpus: 1 rejected by API (docs show it as valid)

Summary

The Vercel Sandbox API rejects resources: { vcpus: 1 } with a 400 error, but multiple documentation pages show or imply 1 vCPU as a valid configuration. The actual minimum is 2 vCPUs (even numbers only).

This matters for I/O-bound workloads. We measured CPU utilization inside OpenClaw sandboxes over a 2-hour live test: <2% CPU even at 162 msgs/hr sustained. A 1 vCPU / 2 GB tier would halve memory costs, which are 97-99% of total sandbox cost for this workload.


Documentation references that show/imply 1 vCPU is valid

1. Pricing page — example table shows 1 vCPU

URL: https://vercel.com/docs/vercel-sandbox/pricing#example-calculations

The "Example calculations" table includes:

| Scenario | Duration | vCPUs | Memory | Active CPU Cost | Memory Cost | Total |
| --- | --- | --- | --- | --- | --- | --- |
| Quick test | 2 min | 1 | 2 GB | $0.004 | $0.001 | ~$0.01 |

This row uses 1 vCPU / 2 GB, implying it's a valid and expected configuration.

2. Pricing page — resource limits table has no minimum

URL: https://vercel.com/docs/vercel-sandbox/pricing#resource-limits

| Resource | Limit |
| --- | --- |
| Maximum vCPUs per sandbox | 8 |
| Memory per vCPU | 2 GB |
| Maximum memory per sandbox | 16 GB |

The table specifies a maximum of 8 vCPUs but documents no minimum. A developer would reasonably assume the minimum is 1.

3. SDK reference — Sandbox.create() documents no constraint

URL: https://vercel.com/docs/vercel-sandbox/sdk-reference#sandbox.create()

The resources.vcpus parameter is documented as:

| Parameter | Type | Required | Details |
| --- | --- | --- | --- |
| resources.vcpus | number | No | Override CPU count (defaults to plan baseline). |

No minimum value, no even-number constraint, no valid range mentioned.

4. SDK reference — "Environment defaults" section

URL: https://vercel.com/docs/vercel-sandbox/sdk-reference#environment-defaults

Resources: Choose the number of virtual CPUs (vcpus) per sandbox. Pricing and plan limits appear in the Sandbox pricing table.

"Choose the number" implies any number within the max (8) is valid.

5. TypeScript type definitions (@vercel/sandbox@1.8.0)

```ts
// From sandbox.d.ts
/**
 * Resources to allocate to the sandbox.
 *
 * Your sandbox will get the amount of vCPUs you specify here and
 * 2048 MB of memory per vCPU.
 */
resources?: {
    vcpus: number;
};
```

The JSDoc says "the amount of vCPUs you specify" with no constraint. Type is plain number.


Actual API behavior

Exact error responses

vcpus: 1 → HTTP 400:

```json
{"error":{"code":"bad_request","message":"Invalid request: `resources.vcpus` should be >= 2."}}
```

vcpus: 3 → HTTP 400:

```json
{"error":{"code":"bad_request","message":"`resources.vcpus` should be even"}}
```

Full test matrix

| vcpus | Result | Actual CPUs | Actual Memory | Error |
| --- | --- | --- | --- | --- |
| 1 | 400 | | | resources.vcpus should be >= 2 |
| 2 | OK | 2 | 4.2 GB | |
| 3 | 400 | | | resources.vcpus should be even |
| 4 | OK | 4 | 8.2 GB | |
| 8 | OK | 8 | 16.3 GB | |
| (omitted) | OK | 2 | 4.2 GB | |

Undocumented constraints:

  1. Minimum is 2 vCPUs (not 1)
  2. Must be an even number
  3. Valid values: 2, 4, 6, 8
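Until the SDK or docs encode these constraints, a client-side guard can fail fast before the network round-trip. This is a hypothetical helper of ours (the function names and error text are not part of the SDK), mirroring the observed API behavior:

```javascript
// Hypothetical pre-flight check mirroring the observed API constraints:
// vcpus must be an even integer between 2 and 8 inclusive.
function isValidVcpus(vcpus) {
  return Number.isInteger(vcpus) && vcpus >= 2 && vcpus <= 8 && vcpus % 2 === 0;
}

function assertValidVcpus(vcpus) {
  if (!isValidVcpus(vcpus)) {
    throw new RangeError(
      `resources.vcpus must be one of 2, 4, 6, 8 (got ${vcpus})`
    );
  }
}
```

Calling assertValidVcpus(1) throws locally with a clear message instead of surfacing as a 400 from the API.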

Reproduction

```sh
mkdir testing-sandbox-cpu && cd testing-sandbox-cpu
pnpm init && pnpm add @vercel/sandbox
vercel link --scope vercel-labs --yes
node test-vcpus.mjs
```

test-vcpus.mjs

```js
import { Sandbox } from '@vercel/sandbox';

const vcpuValues = [1, 2, 3, 4, 8];

for (const vcpus of vcpuValues) {
  console.log(`\nTesting vcpus: ${vcpus}`);
  try {
    const sandbox = await Sandbox.create({
      ports: [3000],
      timeout: 60_000,
      resources: { vcpus },
    });
    const nproc = await sandbox.runCommand('nproc');
    const meminfo = await sandbox.runCommand('sh', ['-c', 'head -1 /proc/meminfo']);
    console.log(`  OK: ${(await nproc.output('stdout')).trim()} CPUs, ${(await meminfo.output('stdout')).trim()}`);
    await sandbox.stop();
  } catch (err) {
    console.log(`  FAILED: ${err.json?.error?.message || err.message}`);
  }
}
```

Cost impact

We measured CPU utilization inside OpenClaw AI gateway sandboxes (2x CPU / 4 GiB) over a 2-hour live test with real chat traffic:

| Persona | Traffic | CPU active % | Load avg 1m |
| --- | --- | --- | --- |
| Light (idle) | 1.5 msg/hr | 0.43% | 0.050 |
| Medium | 17 msg/hr | 0.21% | 0.006 |
| Heavy | 162 msg/hr | 1.58% | 0.064 |

The gateway is almost entirely I/O-bound — it waits ~5 seconds per message for the upstream LLM. Even at 162 msgs/hr sustained, CPU barely registers.
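A rough back-of-envelope from the heavy-persona numbers (our arithmetic, not a new measurement) shows how little CPU each message actually costs:

```javascript
// Heavy persona, measured above: 1.58% CPU active at 162 messages/hour.
const cpuActiveFraction = 0.0158;
const messagesPerHour = 162;

// CPU-seconds consumed per hour (on one vCPU-equivalent).
const cpuSecondsPerHour = cpuActiveFraction * 3600; // ≈ 56.9 s

// Average CPU time per message.
const cpuSecondsPerMessage = cpuSecondsPerHour / messagesPerHour;

console.log(cpuSecondsPerMessage.toFixed(2)); // ≈ 0.35 s of CPU per message
```

About a third of a CPU-second per message, against roughly 5 seconds of waiting on the upstream LLM, which is why even one vCPU would be ample for this workload.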

Memory provisioning dominates cost (97-99% of total). Allowing vcpus: 1 would halve it:

| Config | Memory $/hr | Always-on monthly |
| --- | --- | --- |
| 2 vCPU / 4 GB (current min) | $0.168 | $120.96 |
| 1 vCPU / 2 GB (if available) | $0.084 | $60.48 |
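The monthly figures are just the hourly rates times a 720-hour (30-day) month. A quick sketch of the arithmetic, with the per-GB rate derived by us from the table's $0.168/hr for 4 GB:

```javascript
// Memory rate derived from the table above: $0.168/hr for 4 GB.
const memoryRatePerGbHour = 0.168 / 4; // $0.042 per GB-hour

const hoursPerMonth = 720; // 30-day month

// Always-on monthly memory cost for a given sandbox memory size.
function monthlyMemoryCost(memoryGb) {
  return memoryRatePerGbHour * memoryGb * hoursPerMonth;
}

console.log(monthlyMemoryCost(4).toFixed(2)); // "120.96"
console.log(monthlyMemoryCost(2).toFixed(2)); // "60.48"
```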

Suggested doc fixes

  1. Pricing page (/docs/vercel-sandbox/pricing): Update the example table to start at 2 vCPUs, or enable 1 vCPU in the API. Add a "Minimum vCPUs: 2" row to the resource limits table.
  2. SDK reference (/docs/vercel-sandbox/sdk-reference): Add valid range to resources.vcpus description: "Even number, 2–8."
  3. TypeScript types: Consider vcpus: 2 | 4 | 6 | 8 instead of number.

SDK version tested: @vercel/sandbox@1.8.0 (2026-03-04)
