@geofft
Last active July 21, 2025 13:13
PyTorch does not depend on container-wide NVIDIA libraries
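The claim can be checked from inside the container: a wheel-based torch install ships its CUDA dependencies as `nvidia-*` packages unpacked under site-packages, rather than relying on a system-wide CUDA install. A sketch that lists them (the helper name is mine):

```python
# Sketch: enumerate the NVIDIA shared libraries that a wheel-based torch
# install bundles under <site-packages>/nvidia/*/lib. In the images built
# below, this would show the venv-local CUDA runtime libraries.
import sysconfig
from pathlib import Path

def bundled_nvidia_libs(purelib=None):
    """Return every shared library under <site-packages>/nvidia/*/lib."""
    root = Path(purelib or sysconfig.get_paths()["purelib"]) / "nvidia"
    if not root.is_dir():
        return []
    return sorted(root.glob("*/lib/*.so*"))

if __name__ == "__main__":
    for lib in bundled_nvidia_libs():
        print(lib)
```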
# Dockerfile: base image; uv installs torch (and its bundled NVIDIA wheels) into a venv
FROM python:3.12-slim-bookworm
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
RUN uv venv /v
WORKDIR /v
RUN env UV_TORCH_BACKEND=auto uv pip install torch numpy
CMD ["/bin/bash"]
# Dockerfile.compat-global: install the CUDA compat package container-wide
FROM uv-cuda-demo-base
COPY cuda-compat-12-9_575.57.08-1_amd64.deb /tmp
RUN dpkg -i /tmp/cuda-compat-12-9_575.57.08-1_amd64.deb \
 && rm /tmp/cuda-compat-12-9_575.57.08-1_amd64.deb
# Dockerfile.compat-venv: unpack the compat libraries into the venv's bundled NVIDIA lib dir
FROM uv-cuda-demo-base
COPY cuda-compat-12-9_575.57.08-1_amd64.deb /tmp
RUN dpkg -x /tmp/cuda-compat-12-9_575.57.08-1_amd64.deb /tmp/compat \
 && cp -a /tmp/compat/usr/local/cuda-12.9/compat/* /v/lib/python3.12/site-packages/nvidia/cuda_runtime/lib \
 && rm -r /tmp/compat /tmp/cuda-compat-12-9_575.57.08-1_amd64.deb
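Both compat variants hinge on whether the dynamic loader can resolve the driver libraries at run time, which is what the test command's `ctypes.CDLL("libnvidia-ml.so.1")` probes. A minimal sketch of that probe (the helper name is mine):

```python
# Sketch: ctypes.CDLL wraps dlopen(3), so a bare soname is resolved through
# the loader's usual search order (LD_LIBRARY_PATH, ldconfig cache, ...).
# The "global" image therefore needs LD_LIBRARY_PATH pointed at the compat
# directory, while the "venv" image places the compat libraries where
# torch's own shared objects already look via their RPATH.
import ctypes

def can_dlopen(soname):
    """True if the dynamic loader can resolve `soname` in this process."""
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False
```

In the base image, which ships no driver libraries at all, `can_dlopen("libnvidia-ml.so.1")` would presumably return False, matching a False `torch.cuda.is_available()`.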
#!/bin/bash
set -euo pipefail

compat=cuda-compat-12-9_575.57.08-1_amd64.deb
if ! [ -f "$compat" ]; then
    wget https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/"$compat"
fi

docker_run () {
    docker run --device /dev/nvidia0 --device /dev/nvidiactl --device /dev/nvidia-uvm --rm -it "$@"
}

test_command=(/v/bin/python -c 'import torch; print(f"{torch.cuda.is_available()=}"); import ctypes; print(ctypes.CDLL("libnvidia-ml.so.1"))')

set -x
docker build --tag uv-cuda-demo-base .
for i in global venv; do
    docker build --tag uv-cuda-demo-"$i" -f Dockerfile.compat-"$i" .
done

set +e
docker_run uv-cuda-demo-base ldd /bin/uv
docker_run uv-cuda-demo-base "${test_command[@]}"
docker_run uv-cuda-demo-global env LD_LIBRARY_PATH=/usr/local/cuda-12.9/compat "${test_command[@]}"
docker_run uv-cuda-demo-venv "${test_command[@]}"
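One detail worth noting in the script above: `test_command` is a bash array rather than a flat string, so the quoted Python one-liner survives expansion as a single argument. A standalone sketch of that pattern:

```shell
# A command stored as a bash array expands with word boundaries preserved,
# so arguments containing spaces stay intact through "${cmd[@]}".
cmd=(printf '%s\n' "one argument with spaces")
"${cmd[@]}"
# Storing the same command in a plain string and expanding it unquoted
# would instead split the quoted argument into four separate words.
```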