The Vercel Sandbox API rejects `resources: { vcpus: 1 }` with a 400 error, but multiple documentation pages show or imply 1 vCPU as a valid configuration. The actual minimum is 2 vCPUs (even numbers only).
This matters for I/O-bound workloads. We measured CPU utilization inside OpenClaw sandboxes over a 2-hour live test: <2% CPU even at 162 msgs/hr sustained. A 1 vCPU / 2 GB tier would halve memory costs, which are 97-99% of total sandbox cost for this workload.
URL: https://vercel.com/docs/vercel-sandbox/pricing#example-calculations
The "Example calculations" table includes:
| Scenario | Duration | vCPUs | Memory | Active CPU Cost | Memory Cost | Total |
|---|---|---|---|---|---|---|
| Quick test | 2 min | 1 | 2 GB | $0.004 | $0.001 | ~$0.01 |
This row uses 1 vCPU / 2 GB, implying it's a valid and expected configuration.
URL: https://vercel.com/docs/vercel-sandbox/pricing#resource-limits
| Resource | Limit |
|---|---|
| Maximum vCPUs per sandbox | 8 |
| Memory per vCPU | 2 GB |
| Maximum memory per sandbox | 16 GB |
The table specifies a maximum of 8 vCPUs but documents no minimum. A developer would reasonably assume the minimum is 1.
URL: https://vercel.com/docs/vercel-sandbox/sdk-reference#sandbox.create()
The resources.vcpus parameter is documented as:
| Parameter | Type | Required | Details |
|---|---|---|---|
| `resources.vcpus` | number | No | Override CPU count (defaults to plan baseline). |
No minimum value, no even-number constraint, no valid range mentioned.
URL: https://vercel.com/docs/vercel-sandbox/sdk-reference#environment-defaults
Resources: Choose the number of virtual CPUs (`vcpus`) per sandbox. Pricing and plan limits appear in the Sandbox pricing table.
"Choose the number" implies any number within the max (8) is valid.
```ts
// From sandbox.d.ts
/**
 * Resources to allocate to the sandbox.
 *
 * Your sandbox will get the amount of vCPUs you specify here and
 * 2048 MB of memory per vCPU.
 */
resources?: {
  vcpus: number;
};
```

The JSDoc says "the amount of vCPUs you specify" with no constraint. The type is a plain `number`.
`vcpus: 1` → HTTP 400:

```json
{"error":{"code":"bad_request","message":"Invalid request: `resources.vcpus` should be >= 2."}}
```

`vcpus: 3` → HTTP 400:

```json
{"error":{"code":"bad_request","message":"`resources.vcpus` should be even"}}
```

| vcpus | Result | Actual CPUs | Actual Memory | Error |
|---|---|---|---|---|
| 1 | 400 | — | — | resources.vcpus should be >= 2 |
| 2 | OK | 2 | 4.2 GB | — |
| 3 | 400 | — | — | resources.vcpus should be even |
| 4 | OK | 4 | 8.2 GB | — |
| 8 | OK | 8 | 16.3 GB | — |
| (omitted) | OK | 2 | 4.2 GB | — |
Undocumented constraints:
- Minimum is 2 vCPUs (not 1)
- Must be an even number
- Valid values: 2, 4, 6, 8
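Until the docs or API change, these constraints can be encoded as a small client-side guard so a bad value fails locally rather than as an HTTP 400 from `Sandbox.create`. `assertValidVcpus` is a hypothetical helper name, not part of the SDK:

```typescript
// Hypothetical helper: pre-validate vcpus against the observed constraints
// (even integer, minimum 2, maximum 8) before calling Sandbox.create.
function assertValidVcpus(vcpus: number): void {
  if (!Number.isInteger(vcpus) || vcpus < 2 || vcpus > 8 || vcpus % 2 !== 0) {
    throw new RangeError(
      `resources.vcpus must be an even integer in [2, 8], got ${vcpus}`,
    );
  }
}

// assertValidVcpus(4) passes; assertValidVcpus(1) and assertValidVcpus(3) throw.
```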
```sh
mkdir testing-sandbox-cpu && cd testing-sandbox-cpu
pnpm init && pnpm add @vercel/sandbox
vercel link --scope vercel-labs --yes
node test-vcpus.mjs
```

test-vcpus.mjs:

```js
import { Sandbox } from '@vercel/sandbox';

const vcpuValues = [1, 2, 3, 4, 8];
for (const vcpus of vcpuValues) {
  console.log(`\nTesting vcpus: ${vcpus}`);
  try {
    const sandbox = await Sandbox.create({
      ports: [3000],
      timeout: 60_000,
      resources: { vcpus },
    });
    const nproc = await sandbox.runCommand('nproc');
    const meminfo = await sandbox.runCommand('sh', ['-c', 'head -1 /proc/meminfo']);
    console.log(`  OK: ${(await nproc.output('stdout')).trim()} CPUs, ${(await meminfo.output('stdout')).trim()}`);
    await sandbox.stop();
  } catch (err) {
    console.log(`  FAILED: ${err.json?.error?.message || err.message}`);
  }
}
```

We measured CPU utilization inside OpenClaw AI gateway sandboxes (2x CPU / 4 GiB) over a 2-hour live test with real chat traffic:
| Persona | Traffic | CPU active % | Load avg 1m |
|---|---|---|---|
| Light (idle) | 1.5 msg/hr | 0.43% | 0.050 |
| Medium | 17 msg/hr | 0.21% | 0.006 |
| Heavy | 162 msg/hr | 1.58% | 0.064 |
The gateway is almost entirely I/O-bound — it waits ~5 seconds per message for the upstream LLM. Even at 162 msgs/hr sustained, CPU barely registers.
Memory provisioning dominates cost (97-99% of total). Allowing `vcpus: 1` would halve it:
| Config | Memory $/hr | Always-on monthly |
|---|---|---|
| 2 vCPU / 4 GB (current min) | $0.168 | $120.96 |
| 1 vCPU / 2 GB (if available) | $0.084 | $60.48 |
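The table's figures follow from simple arithmetic. In the sketch below, the $0.042/GB-hr memory rate is back-derived from this report's own numbers ($0.168/hr for 4 GB), not quoted from official pricing, and a month is taken as 720 hours (30 days):

```typescript
// Memory-cost arithmetic behind the table above. The per-GB-hour rate is
// back-derived from this report's numbers, not an official published price.
const MEMORY_RATE_PER_GB_HR = 0.042;
const GB_PER_VCPU = 2; // sandbox memory is fixed at 2 GB per vCPU
const HOURS_PER_MONTH = 720; // 30-day month

function monthlyMemoryCost(vcpus: number): number {
  return vcpus * GB_PER_VCPU * MEMORY_RATE_PER_GB_HR * HOURS_PER_MONTH;
}

console.log(monthlyMemoryCost(2).toFixed(2)); // current minimum: 120.96
console.log(monthlyMemoryCost(1).toFixed(2)); // if 1 vCPU were allowed: 60.48
```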
- Pricing page (`/docs/vercel-sandbox/pricing`): Update the example table to start at 2 vCPUs, or enable 1 vCPU in the API. Add a "Minimum vCPUs: 2" row to the resource limits table.
- SDK reference (`/docs/vercel-sandbox/sdk-reference`): Add the valid range to the `resources.vcpus` description: "Even number, 2–8."
- TypeScript types: Consider `vcpus: 2 | 4 | 6 | 8` instead of `number`.
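A sketch of that type-narrowing suggestion; the surrounding shape is assumed from the `sandbox.d.ts` excerpt earlier in this report:

```typescript
// Sketch of the narrowed type. The SDK currently declares `vcpus: number`;
// a union of the actually-accepted values moves the failure from an HTTP 400
// at runtime to a compile-time type error.
type SandboxVcpus = 2 | 4 | 6 | 8;

interface SandboxResources {
  vcpus: SandboxVcpus;
}

const ok: SandboxResources = { vcpus: 4 }; // compiles
// const bad: SandboxResources = { vcpus: 3 }; // type error: 3 is not assignable
```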
SDK version tested: @vercel/sandbox@1.8.0 (2026-03-04)