2026-03-09T09:23:05Z by Showboat dev
Create a single-node CockroachDB cluster with OpenTelemetry and fluent-bit for Datadog observability, then run a KV workload targeting ~20-25% CPU.
export c=tobias-gcassist && roachprod create -n 1 --local-ssd=false --gce-pd-volume-count 1 --gce-machine-type n2-standard-16 --gce-pd-volume-size 200 $c
WARN: running insecure mode
2026/03/09 09:23:23 roachprod.go:1782: Creating cluster tobias-gcassist with 1 nodes...
2026/03/09 09:23:38 create_sdk.go:238: BulkInsert: successfully created 1 instances in zone us-east1-d
2026/03/09 09:23:54 roachprod.go:1803: Created cluster tobias-gcassist; setting up SSH...
Name DNS Private IP Public IP Machine Type CPU Arch CPU Family Provision Model
tobias-gcassist-0001 tobias-gcassist-0001.us-east1-d.cockroach-ephemeral 10.142.2.95 35.237.72.102 n2-standard-16 amd64 ice lake ondemand
roachprod opentelemetry-start tobias-gcassist --datadog-api-key=$DD_API_KEY
roachprod fluent-bit-start tobias-gcassist --datadog-api-key=$DD_API_KEY
roachprod stage tobias-gcassist cockroach
roachprod start tobias-gcassist --enable-fluent-sink
1KB blocks, 50% reads (20% of which are sequential scans), 128 concurrency, 5000 max ops/s.
roachprod run tobias-gcassist -- ./cockroach workload init kv --splits 100 --min-block-bytes 1024 --max-block-bytes 1024 $(roachprod pgurl tobias-gcassist)
roachprod run tobias-gcassist -- sudo systemd-run --collect --same-dir --unit kv-workload ./cockroach workload run kv --read-percent 50 --span-percent 20 --span-limit 1000 --min-block-bytes 1024 --max-block-bytes 1024 --concurrency 128 --max-rate 5000 --tolerate-errors $(roachprod pgurl tobias-gcassist)
pup metrics query --query="avg:cockroachdb.sys.gc.assist.ns{cluster:tobias-gcassist}.as_rate()" --from=5m
pup metrics query --query="avg:cockroachdb.admission.scheduler_latency_listener.p99_nanos{cluster:tobias-gcassist}" --from=5m
pup metrics query --query="avg:cockroachdb.sys.cpu.combined.percent.normalized{cluster:tobias-gcassist}" --from=5m
- GC Assist rate (sys.gc.assist.ns): ~8ms/s
- P99 Go scheduling latency (admission.scheduler_latency_listener.p99_nanos): ~66-89μs — healthy, well under the 1ms threshold.
- CPU utilization (sys.cpu.combined.percent.normalized): ~21-25% on the 16-vCPU machine.
With cons/mark = 0.133, allocations are far from outpacing GC, yet we see ~1.4 MB of assist work (0.81 ms CPU). This appears to be an artifact of the Go runtime credit system, not actual pacing pressure.
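For context, my reading of the pacer's cons/mark is: it compares the application's allocation ("cons") rate against the rate at which the GC can mark memory, so a value well below 1 means marking comfortably outruns allocation. A quick back-of-envelope on the headroom this implies (my arithmetic, not from the logs above):

```go
package main

import "fmt"

func main() {
	// cons/mark from the pacer: allocation rate relative to mark rate.
	consMark := 0.133
	// Headroom: roughly how many times faster the GC can mark than the
	// application allocates at this operating point.
	fmt.Printf("mark headroom: ~%.1fx\n", 1/consMark)
}
```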
The credit mechanism: Every allocation charges debt against the goroutine (gcAssistBytes -= size in malloc.go). When gcAssistBytes goes negative, gcAssistAlloc is called. It first tries to steal credit from bgScanCredit, a global pool that background mark workers flush into every gcCreditSlack = 2000 bytes of scan work. If credit is available, the goroutine takes it and returns — no real assist work. If not, the goroutine does actual scan work with a minimum of gcOverAssistWork = 64 KB (to amortize future allocations).
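The steal-or-scan decision can be sketched as a toy model. This is a simplified simulation, not the real runtime code: the names mirror the runtime (gcAssistBytes, bgScanCredit, gcOverAssistWork, gcCreditSlack) but it assumes an assist ratio of 1 scan byte per allocated byte and ignores scheduling entirely.

```go
package main

import "fmt"

const (
	gcOverAssistWork = 64 << 10 // minimum real assist, in bytes of scan work
	gcCreditSlack    = 2000     // background workers flush credit in these chunks
)

type g struct{ gcAssistBytes int64 } // per-goroutine credit (negative = debt)

var bgScanCredit int64 // global pool that background mark workers flush into

// malloc charges every allocation against the goroutine's credit and
// triggers an assist once the credit goes negative.
func malloc(gp *g, size int64) (realAssistScan int64) {
	gp.gcAssistBytes -= size
	if gp.gcAssistBytes < 0 {
		return gcAssistAlloc(gp)
	}
	return 0
}

// gcAssistAlloc first tries to steal from bgScanCredit; only if the pool
// cannot cover the debt does the goroutine do real scan work, and then it
// does at least gcOverAssistWork to amortize future allocations.
func gcAssistAlloc(gp *g) (realAssistScan int64) {
	debt := -gp.gcAssistBytes
	if bgScanCredit >= debt {
		bgScanCredit -= debt
		gp.gcAssistBytes = 0
		return 0 // stolen credit, no real work
	}
	work := debt
	if work < gcOverAssistWork {
		work = gcOverAssistWork
	}
	gp.gcAssistBytes += work // simplification: 1 scan byte pays 1 alloc byte
	return work
}

func main() {
	gp := &g{}
	// Cycle start: zero credit, empty pool -> first allocation does real work.
	fmt.Println("first alloc, empty pool:", malloc(gp, 100), "bytes scanned")
	// The 64 KB over-assist leaves positive credit, so later allocs are free.
	fmt.Println("second alloc:", malloc(gp, 100), "bytes scanned")
	// Once background workers have flushed credit, a fresh goroutine steals.
	bgScanCredit = 10 * gcCreditSlack
	gp2 := &g{}
	fmt.Println("alloc with bg credit:", malloc(gp2, 100), "bytes scanned")
}
```

The third case is the steady-state fast path: as long as background workers keep bgScanCredit topped up, assists never do real scanning.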
The key detail: Every goroutine starts each GC cycle with gcAssistBytes = 0 (reset in mgc.go:1785 via forEachG). This means the very first allocation any goroutine makes during a GC cycle immediately puts it into debt and triggers gcAssistAlloc. Whether that results in real work depends on whether bgScanCredit has been flushed yet by background workers. Early in the cycle — before background workers have ramped up — the credit pool is more likely to be empty, forcing goroutines into real assist work.
So the likely source of the ~1.4 MB of assist work is:
- The start of the GC cycle, when many goroutines allocate for the first time with zero credit and background workers have not yet flushed enough credit to steal from.
- Brief windows mid-cycle where bgScanCredit is transiently empty between background worker flushes (every 2000 bytes of scan work).
Each "missed steal" does at least 64 KB of scanning, pre-paying ~25 KB of allocation credit (at assist ratio ~2.5), which prevents further assists for many subsequent allocations. The result is a small number of short assists rather than sustained pressure.
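As a sanity check on those numbers (using the ~2.5 assist ratio estimated above, i.e. scan bytes charged per allocated byte), the arithmetic works out as follows:

```go
package main

import "fmt"

func main() {
	const overAssistScan = 64 << 10 // bytes of scan work per "missed steal"
	const assistRatio = 2.5         // scan bytes per allocated byte (estimate)

	// Allocation credit bought by one minimum-sized assist.
	creditBytes := float64(overAssistScan) / assistRatio
	fmt.Printf("credit per assist: ~%.1f KB\n", creditBytes/1024)

	// How many minimum-sized assists would account for ~1.4 MB of scan work.
	observedScan := 1.4 * (1 << 20)
	fmt.Printf("implied assists: ~%.0f\n", observedScan/overAssistScan)
}
```

So the observed ~1.4 MB is consistent with on the order of twenty minimum-sized assists, matching the "small number of short assists" picture.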
Source: ~/go-sdk/src/runtime/mgcmark.go (gcAssistAlloc, gcFlushBgCredit), ~/go-sdk/src/runtime/mgc.go:1785 (per-cycle reset), ~/go-sdk/src/runtime/mgcpacer.go (gcCreditSlack, gcOverAssistWork constants).