GPU + uv in Claude Code sandbox (Linux/NVIDIA)
Problem: Claude Code's sandbox (bubblewrap) blocks access to GPU device nodes and to the uv package cache.
Solution: a `bwrap` wrapper at `~/.local/bin/bwrap`
Create `~/.local/bin/bwrap` with the script below (it must be executable, and `~/.local/bin` must precede `/usr/bin` in `$PATH`):
```bash
#!/bin/bash
# GPU-aware bwrap wrapper for Claude Code sandbox.
# Injects GPU device pass-through and read-only uv cache access.
#
# CRITICAL: --dev-bind-try must come AFTER --dev /dev or devtmpfs shadows it.
# See: https://github.com/anthropics/claude-code/issues/13108
set -euo pipefail

REAL_BWRAP=/usr/bin/bwrap

GPU_DEVICES=(
  /dev/dri
  /dev/kfd
  /dev/nvidia0
  /dev/nvidiactl
  /dev/nvidia-uvm
  /dev/nvidia-uvm-tools
  /dev/nvidia-modeset
)

# Read-only: cache reads OK, writes blocked to prevent cache poisoning.
EXTRA_READONLY=(
  "$HOME/.cache/uv"
)

args=()
for dir in "${EXTRA_READONLY[@]}"; do
  [[ -d "$dir" ]] && args+=(--ro-bind "$dir" "$dir")
done

# Walk the original arguments, injecting the device binds right after the
# argument that follows --dev (i.e. after "--dev /dev").
inject_next=false
for arg in "$@"; do
  args+=("$arg")
  if [[ "$inject_next" == true ]]; then
    for dev in "${GPU_DEVICES[@]}"; do
      args+=(--dev-bind-try "$dev" "$dev")
    done
    inject_next=false
  fi
  [[ "$arg" == "--dev" ]] && inject_next=true
done

exec "$REAL_BWRAP" "${args[@]}"
```

Then make it executable:

```bash
chmod +x ~/.local/bin/bwrap
```
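For the interception to work, `~/.local/bin` has to shadow `/usr/bin` in `$PATH`. A minimal sketch of that setup (the `case` guard is just one common idiom for avoiding duplicate PATH entries; adapt to your shell's profile file):

```bash
mkdir -p ~/.local/bin
# Prepend ~/.local/bin only if it is not already on PATH:
case ":$PATH:" in
  *":$HOME/.local/bin:"*) ;;
  *) export PATH="$HOME/.local/bin:$PATH" ;;
esac
# Once the wrapper exists, this should resolve to ~/.local/bin/bwrap,
# not /usr/bin/bwrap:
command -v bwrap
```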
How it works
Claude Code resolves `bwrap` via `$PATH`, so `~/.local/bin/bwrap` intercepts the call, injects the extra flags, then delegates to `/usr/bin/bwrap`.
- GPU: `--dev-bind-try` bind-mounts each device node. It must come after `--dev /dev` (bwrap mounts a fresh devtmpfs there, which shadows earlier binds).
- uv cache: `--ro-bind` exposes `~/.cache/uv` read-only. `uv run` with an existing `.venv` never touches the cache anyway; `uv sync` will fail inside the sandbox, which is intentional.
- `--dev-bind-try` silently skips missing devices (safe on non-GPU machines).
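The injection pass is easy to sanity-check outside the sandbox. This standalone sketch dry-runs the same loop on a hardcoded demo argv (device list shortened to one entry), printing the argv the wrapper would hand to the real bwrap instead of exec'ing it:

```bash
#!/bin/bash
# Dry run of the wrapper's injection pass: print the final argv instead of
# exec'ing bwrap. The demo argv and single-entry device list are stand-ins.
set -- --unshare-all --dev /dev ls /dev

GPU_DEVICES=(/dev/nvidia0)
args=()
inject_next=false
for arg in "$@"; do
  args+=("$arg")
  if [[ "$inject_next" == true ]]; then
    for dev in "${GPU_DEVICES[@]}"; do
      args+=(--dev-bind-try "$dev" "$dev")
    done
    inject_next=false
  fi
  [[ "$arg" == "--dev" ]] && inject_next=true
done

printf '%s\n' "${args[@]}"
```

Running it shows `--dev-bind-try /dev/nvidia0 /dev/nvidia0` landing directly after `--dev /dev` and before `ls`, mirroring the ordering requirement above.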
Verify
```bash
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
uv run python -c "import torch; print(torch.cuda.is_available())"
```