Alpine LXC Container with iGPU Ollama Server on Proxmox

How to set up an LXC container with AMD iGPU (Ryzen 7 5800H) passthrough for Ollama on Proxmox

Proxmox

First we need to install the Alpine LXC container; the easiest way is to use the Proxmox Helper Scripts: https://tteck.github.io/Proxmox/

bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/ct/alpine.sh)"

I would install in "advanced" mode and be generous with the resources (8-16 cores, 16 GB RAM, 128 GB disk).

Once it is installed, halt the container.
Then append the following to the container's configuration file in Proxmox to pass through (actually bind-mount) the iGPU:

# /etc/pve/lxc/<LXC_ID>.conf
# From Jellyfin lxc.conf
dev0: /dev/dri/card0,gid=44
dev1: /dev/dri/renderD128,gid=104
dev2: /dev/kfd,gid=104
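
The gid values above come from the Jellyfin LXC example (44 for the video group, 104 for render), and they can differ between installs. A quick sanity check from inside the container once it boots, assuming the standard Alpine group names:

# Check which numeric GIDs own the GPU device nodes inside the container
ls -ln /dev/dri /dev/kfd
# ...and which local groups map to those IDs
grep -E '^(video|render):' /etc/group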

Alpine LXC

First we need to set up Docker:

apk update
apk add docker docker-compose
rc-update add docker
service docker start
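
Before moving on, it is worth confirming that the Docker daemon actually came up (standard OpenRC and Docker commands):

# Check the service state and that the daemon answers
rc-service docker status
docker info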

Then just create a docker-compose.yml file with this content:

services:
  ollama:
    user: root
    container_name: ollama
    image: ollama/ollama:rocm
    healthcheck:
      test: ollama --version || exit 1
      interval: 10s
    ports:
      - "11434:11434"
    restart: unless-stopped
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
      - /dev/kfd:/dev/kfd
    volumes:
      - ./config:/root/.ollama
    environment:
      - HSA_OVERRIDE_GFX_VERSION=9.0.0

  owui:
    ports:
      - 80:8080
    extra_hosts:
      - host.docker.internal:host-gateway
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama
    container_name: open-webui
    restart: unless-stopped
    image: ghcr.io/open-webui/open-webui:main

volumes:
  open-webui:

To run it, execute docker compose up -d.
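
Once the stack is up, you can check whether the ROCm build of Ollama actually picked up the iGPU; the exact log wording varies between Ollama versions, so this is just a quick sanity check:

# Watch Ollama's startup logs for the GPU/ROCm detection lines
docker logs -f ollama
# Confirm the device nodes were passed through into the container
docker exec -it ollama ls -l /dev/kfd /dev/dri

Open WebUI is mapped to port 80, so it should be reachable at http://<container-ip>/ once both containers are healthy.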

You can now access the Ollama CLI (to install new models through the command line) with this command:

docker exec -it ollama /bin/ollama
# Alternatively you can create an alias for it.
alias ollama="docker exec -it ollama /bin/ollama"
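
With the alias defined, pulling and testing a model works the same as on a native install; llama3.2 below is just an example model name:

# Pull an example model, run a quick prompt, and list what is installed
ollama pull llama3.2
ollama run llama3.2 "Say hello from the LXC container"
ollama list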

famewolf commented Nov 24, 2025

Even using the Proxmox helper Ollama script, sharing the devices, and adding the environment variable for my Radeon 680M iGPU, it failed while I was following a tutorial on adding ROCm to the container: the last amdgpu command reported that my amdgpu DKMS modules did not match the kernel, which is Proxmox's 6.17 kernel from Debian 13. All the recommended fixes involve downgrading the kernel to 6.8, which isn't an option on Proxmox, so you are back to VMs and only being able to pass the GPU through to one VM. I want to run Jellyfin AND Ollama with both able to access the GPU on a Ryzen mini PC using ROCm with a Radeon 680M iGPU, as I mentioned.
