Building InstantSfM On Rocky Linux 9

InstantSfM Clean Install (Linux · CUDA 12 · Turing sm_75)

This is a tested, working, clean install guide for InstantSfM on Linux with:

  • GPU: NVIDIA Turing (sm_75, e.g. RTX 2080/2070/Super/Quadro T-series)
  • CUDA Runtime 12.x
  • Python 3.12
  • Conda environment
  • Torch 2.3.1+cu121

It includes all the steps needed to avoid the OpenSSL, cuDSS, PyTorch, Qt/XCB, and Gradio/Pydantic conflicts and errors we ran into along the way.
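Before starting, it helps to confirm that the GPU and driver actually report a Turing-class compute capability (7.5). On reasonably recent drivers, nvidia-smi can query this directly; the compute_cap field is only available on newer driver versions, so treat this as an optional check.

# Confirm GPU model, driver, and compute capability (7.5 = Turing)
nvidia-smi --query-gpu=name,driver_version,compute_cap --format=csv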


✅ 1. Clone Repo & Create Conda Environment

# Clone InstantSfM (SSH or HTTPS is fine)
git clone https://github.com/cre185/InstantSfM.git --recursive
cd InstantSfM

# Create environment
conda create -y -n instantsfm python=3.12
conda activate instantsfm

✅ 2. Install Base Dependencies

# Core deps (PyTorch pinned to the CUDA 12.1 build)
torch_version="2.3.1+cu121"
pip install "torch==${torch_version}" torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
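
A quick sanity check that this Torch build imports and reports CUDA 12.1 (a one-liner sketch; it only checks availability, not a full GPU workload):

# Should print 2.3.1+cu121, 12.1, and CUDA available: True
python -c "import torch; print(torch.__version__, torch.version.cuda); print('CUDA available:', torch.cuda.is_available())"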

# Common Image/ML deps
pip install numpy matplotlib tqdm opencv-python kornia scikit-image imageio

# UI dependencies (Gradio, but pin Pydantic/image libs to avoid errors)
pip install gradio==5.44.0 pillow==11.3.0

✅ 3. Fix FastAPI / Pydantic / Typing Extensions Compatibility

pip install --no-cache-dir "typing_extensions>=4.12.2" "pydantic>=2.6,<3" "fastapi>=0.112,<1"
# typing-inspection: latest available version at the time of writing
pip install --no-cache-dir typing-inspection==0.4.2
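
To confirm the pins resolved cleanly, a minimal import check (nothing project-specific assumed; it should print three version strings with no ImportError):

python -c "import fastapi, pydantic, gradio; print(fastapi.__version__, pydantic.__version__, gradio.__version__)"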

✅ 4. Install CUDA Dev Tools (from Conda Forge + NVIDIA)

conda install -y -c conda-forge cuda-toolkit
conda install -y -c nvidia cuda-cccl=12.4.*

Env exports:

export CUDA_HOME="$CONDA_PREFIX"
export CUDACXX="$CONDA_PREFIX/bin/nvcc"
export TORCH_CUDA_ARCH_LIST="7.5"  # Turing

# Core CUDA paths
export CPATH="$CONDA_PREFIX/targets/x86_64-linux/include:$CPATH"
export LIBRARY_PATH="$CONDA_PREFIX/targets/x86_64-linux/lib:$LIBRARY_PATH"
export LD_LIBRARY_PATH="$CONDA_PREFIX/targets/x86_64-linux/lib:$CONDA_PREFIX/lib:$LD_LIBRARY_PATH"
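
With the exports in place, nvcc from the conda toolkit should resolve first on PATH and report a 12.x release:

# Should report the conda-installed CUDA 12.x compiler
"$CUDACXX" --version
which nvcc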

✅ 5. Install cuDSS (Required by BAE + InstantSfM)

Download from: https://developer.nvidia.com/cudss-downloads (Linux · x86_64 · CUDA 12)

Extract (example):

mkdir -p ~/dev
cd ~/dev
# Assume file is libcudss-linux-x86_64-0.7.1.4_cuda12-archive.tar.gz
tar -xvf libcudss-linux-x86_64-0.7.1.4_cuda12-archive.tar.gz

Copy into conda env:

export CUDSS_PATH=~/dev/libcudss-linux-x86_64-0.7.1.4_cuda12-archive
cp -r $CUDSS_PATH/include/* $CONDA_PREFIX/targets/x86_64-linux/include/
cp -r $CUDSS_PATH/lib/* $CONDA_PREFIX/targets/x86_64-linux/lib/
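
A quick check that the headers and shared libraries landed where the build will look for them (paths follow the copy commands above; exact filenames depend on the cuDSS release you downloaded):

ls "$CONDA_PREFIX/targets/x86_64-linux/include/" | grep -i cudss
ls "$CONDA_PREFIX/targets/x86_64-linux/lib/" | grep -i cudss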

✅ 6. Ensure CUDA Libraries (cublas, cusolver, cusparse) are Available

pip install \
  nvidia-cuda-runtime-cu12==12.1.105 \
  nvidia-cublas-cu12==12.1.3.1 \
  nvidia-cusolver-cu12==11.4.5.107 \
  nvidia-cusparse-cu12==12.1.0.106

# Add to runtime path
export LD_LIBRARY_PATH="$CONDA_PREFIX/lib/python3.12/site-packages/nvidia/cuda_runtime/lib:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH="$CONDA_PREFIX/lib/python3.12/site-packages/nvidia/cublas/lib:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH="$CONDA_PREFIX/lib/python3.12/site-packages/nvidia/cusolver/lib:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH="$CONDA_PREFIX/lib/python3.12/site-packages/nvidia/cusparse/lib:$LD_LIBRARY_PATH"

Test:

python - << 'PY'
import ctypes
for so in ("libcublas.so.12", "libcusolver.so.11", "libcusparse.so.12", "libcudss.so"):
    ctypes.CDLL(so)
print("✅ All CUDA libs load OK")
PY

✅ 7. Install BAE (Bundle Adjustment Engine)

pip install git+ssh://git@github.com/zitongzhan/bae.git || \
pip install git+https://github.com/zitongzhan/bae.git
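
To confirm the install registered, list it with pip (the distribution name is assumed to contain "bae"; adjust the pattern if pip lists it under a different name):

pip list | grep -i bae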

✅ 8. Install COLMAP (Headless or GUI)

conda install -y -c conda-forge colmap

Fix Qt/XCB headless crash:

export QT_QPA_PLATFORM=offscreen
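
With QT_QPA_PLATFORM set, COLMAP's CLI should run without trying to open a display:

colmap help   # should list the available COLMAP commands without an XCB/Qt error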

✅ 9. Install InstantSfM Package

pip install -e .  # from inside InstantSfM folder

Test demo GUI:

python demo.py  # Gradio interface should open
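
If the machine is remote, Gradio serves on localhost (typically port 7860 by default, unless demo.py overrides it, which I haven't verified); forwarding that port over SSH lets you open the UI in a local browser. Replace user/remote-host with your own values:

# Run on your local machine
ssh -L 7860:localhost:7860 user@remote-host
# then browse to http://localhost:7860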

✅ 10. Command-Line Test with Images

IMAGES="/path/to/your_root_dir"
ins-feat --data_path "$IMAGES"

Note that the images themselves must live in a subfolder named images inside the your_root_dir above (see the example layout below).
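
For example (filenames are placeholders):

your_root_dir/
└── images/
    ├── IMG_0001.jpg
    ├── IMG_0002.jpg
    └── ...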

If running headless or via SSH, ensure:

export QT_QPA_PLATFORM=offscreen

✅ 11. Optional: Save These Environment Variables Permanently

Append to ~/.bashrc or ~/.bash_profile:

export QT_QPA_PLATFORM=offscreen
export CUDA_HOME="$CONDA_PREFIX"
export CUDACXX="$CONDA_PREFIX/bin/nvcc"
export TORCH_CUDA_ARCH_LIST="7.5"
export LD_LIBRARY_PATH="$CONDA_PREFIX/lib:$CONDA_PREFIX/targets/x86_64-linux/lib:$LD_LIBRARY_PATH"

✅ Final Notes

✔ No more OpenSSL mismatch

✔ cuDSS found and linked

✔ Torch + CUDA 12.1 working

✔ Pydantic/Gradio/FastAPI errors fixed

✔ Qt/XCB crash solved with headless mode

