@spamshaker
Last active February 22, 2026 07:05
DGX Spark llama.cpp build/install
#!/usr/bin/env bash
set -euo pipefail
# =============================================================================
# Background
# =============================================================================
# Ubuntu 24.04 ships GCC 13/14 as its system compilers. These releases predate
# tuning support for the CPU cores in the DGX Spark's GB10 Grace Blackwell
# superchip (ARM Cortex-X925/A725 cores), so building performance-sensitive
# code on this machine benefits from a newer toolchain. (The `NVIDIA_GB10=1`
# flag mentioned in the box64 docs below is a box64 CMake option for these
# devices, not a compiler feature.)
#
# To obtain a clean, optimized build we must:
# 1. Build and install GCC 15, which can target the GB10's ARM cores.
# 2. Use CUDA >= 13, as earlier CUDA releases cannot target the GB10
#    (Blackwell-generation) GPU.
#
# The script below automates the installation of GCC 15, configures the
# appropriate alternatives, and builds the LLaMA.cpp project with CUDA support.
# =============================================================================
# Sources:
# - Ubuntu GCC availability: https://documentation.ubuntu.com/ubuntu-for-developers/reference/availability/gcc/
# - Box64 compile docs for DGX Spark GB10: https://github.com/ptitSeb/box64/blob/main/docs/COMPILE.md#for-dgx-sparkgb10-based-devices
# - GCC Compilation guide: https://medium.com/@xersendo/moving-to-c-26-how-to-build-and-set-up-gcc-15-1-on-ubuntu-f52cc9173fa0
# =============================================================================
export CONFIG_SHELL=/bin/bash
trap 'echo "Error on line $LINENO" >&2' ERR
export CMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc
sudo -v && echo "Sudo enabled"  # cache sudo credentials up front
# Update system packages and record the current gcc/g++ versions
sudo apt update && sudo apt upgrade -y
gcc --version && g++ --version
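```shell
# -----------------------------------------------------------------------------
# Hedged preflight sketch (addition, not in the original gist): fail fast if
# the toolchain requirements described above are not met. `ver_major` is a
# hypothetical helper that extracts the leading major number from a version
# string such as "13.3.0" or "release 13.0, V13.0.48".
ver_major() {
  printf '%s\n' "$1" | sed -n 's/[^0-9]*\([0-9][0-9]*\).*/\1/p'
}
# Example checks (commented out so the script's behaviour is unchanged):
# [ "$(ver_major "$(gcc -dumpversion)")" -ge 15 ] || echo "note: GCC 15 will be built below" >&2
# [ "$(ver_major "$(/usr/local/cuda/bin/nvcc --version | grep release)")" -ge 13 ] || echo "CUDA >= 13 required" >&2
# -----------------------------------------------------------------------------
```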
# Install build dependencies required for GCC and LLaMA.cpp
sudo apt install -y \
  build-essential git make gawk \
  flex bison libgmp-dev libmpfr-dev libmpc-dev \
  python3 binutils perl libisl-dev libzstd-dev \
  tar gzip bzip2 libssl-dev
# -------------------------------------------------------------------------
# Build and install GCC 15 from source
# -------------------------------------------------------------------------
# Create the directory if it does not exist
mkdir -p "$HOME/gcc-15"
cd "$HOME/gcc-15" || exit
# Clone the GCC source if it hasn't been cloned yet
if [ ! -d "gcc-15-source" ]; then
git clone https://gcc.gnu.org/git/gcc.git gcc-15-source
fi
cd gcc-15-source || exit
git checkout releases/gcc-15.2.0
./contrib/download_prerequisites
cd "$HOME/gcc-15" || exit
mkdir -p gcc-15-build
cd gcc-15-build || exit
../gcc-15-source/configure --prefix=/opt/gcc-15 --disable-multilib --enable-languages=c,c++
make -j"$(nproc)"
sudo make install
# Register GCC 15 as the default compiler (priority 100)
sudo update-alternatives --install /usr/bin/g++ g++ /opt/gcc-15/bin/g++ 100
sudo update-alternatives --install /usr/bin/gcc gcc /opt/gcc-15/bin/gcc 100
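```shell
# Hedged verification sketch (addition, not in the original gist): report where
# a compiler command now resolves and which version it is, so the
# update-alternatives switch above can be eyeballed before the long build below.
describe_compiler() {
  # $1: compiler command name; print "path (version)" or a "not found" note.
  if command -v "$1" >/dev/null 2>&1; then
    printf '%s (%s)\n' "$(command -v "$1")" "$("$1" -dumpversion)"
  else
    printf '%s: not found\n' "$1"
  fi
}
describe_compiler gcc
describe_compiler g++
```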
# -------------------------------------------------------------------------
# Build LLaMA.cpp with CUDA support
# -------------------------------------------------------------------------
# Clone llama.cpp if it hasn't been cloned yet
if [ ! -d "$HOME/ggml-org/llama.cpp" ]; then
git clone https://github.com/ggml-org/llama.cpp "$HOME/ggml-org/llama.cpp"
fi
cd "$HOME/ggml-org/llama.cpp" || exit
cmake -B build-cuda -DGGML_CUDA=ON
cmake --build build-cuda -j"$(nproc)"
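```shell
# Hedged post-build check (addition, not in the original gist): cmake places
# the binaries under build-cuda/bin; `check_tool` is a hypothetical helper that
# reports whether a given tool was produced without aborting the script.
check_tool() {
  # $1: directory, $2: binary name
  if [ -x "$1/$2" ]; then echo "built: $1/$2"; else echo "missing: $1/$2"; fi
}
check_tool build-cuda/bin llama-server
check_tool build-cuda/bin llama-cli
```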
# -------------------------------------------------------------------------
# Post‑install usage
# -------------------------------------------------------------------------
echo "Add LLaMA binaries to your PATH:"
echo 'export PATH="$HOME/ggml-org/llama.cpp/build-cuda/bin:$PATH"' >> "$HOME/.bashrc"
echo "Run 'source \"$HOME/.bashrc\"' to reload your PATH."
echo "Start the LLaMA server with:"
echo "llama-server --gpt-oss-120b-default --host 0.0.0.0 --port 8023"
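```shell
# Hedged usage sketch (addition, not in the original gist): llama-server
# exposes an OpenAI-compatible HTTP API, so the port configured above can be
# smoke-tested with curl once a model is loaded. The JSON payload is
# illustrative only.
SMOKE_CMD='curl -s http://localhost:8023/v1/chat/completions -H "Content-Type: application/json" -d "{\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"}]}"'
echo "Smoke-test the running server with:"
echo "$SMOKE_CMD"
```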