git clone git@github.com:ollama/ollama.git
cd ollama
make -f Makefile.sync checkout
make -f Makefile.sync sync
cmake -B build -G Ninja -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ --preset "ROCm 6" -DCMAKE_INSTALL_PREFIX=/usr/local -DAMDGPU_TARGETS="gfx1201"
NOTE: Use /usr/local because the Linux ollama installer installs there.
cmake --build build --config Release
NOTE: I am manually overriding AMDGPU_TARGETS to build only for my GPU (gfx1201), which makes the build much faster.
NOTE: You do not need to specify AMDGPU_TARGETS; if you used the "ROCm 6" preset in the first cmake step, it will build for all ROCm 6 GPUs.
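If you are not sure which gfx target your card is, one quick way (assuming the ROCm tooling is installed and rocminfo is on PATH) is to pull the gfx names out of the agent list:

```shell
# List the gfx targets ROCm sees; each GPU agent has a "Name: gfxNNNN" line.
# Assumes rocminfo from a working ROCm install.
rocminfo | grep -oE 'gfx[0-9a-f]+' | sort -u
```

Use the value printed here in place of gfx1201 in the cmake step above.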
cd build
sudo make install
NOTE: Because you used the /usr/local install prefix, this installs the libraries to the correct location (/usr/local/lib/ollama/*), since on Linux ollama looks for the libs relative to the executable, at ../lib/ollama from /usr/local/bin/ollama.
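To sanity-check the relative lookup described above, you can resolve ../lib/ollama from the binary's directory (realpath -m from GNU coreutils resolves the path even before the install has run):

```shell
# From /usr/local/bin, ../lib/ollama must resolve to /usr/local/lib/ollama,
# which is where "sudo make install" placed the ROCm libraries.
realpath -m /usr/local/bin/../lib/ollama
# After installing, the directory should list the installed libraries:
# ls /usr/local/lib/ollama/
```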
cd ..
go build .
sudo systemctl stop ollama
sudo cp ./ollama /usr/local/bin/ollama
sudo systemctl start ollama
sudo systemctl status ollama
...
msg="amdgpu is supported" gpu=GPU-********* gpu_type=gfx1201
...
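To check the same thing non-interactively (assuming the service is named ollama, as the Linux installer sets it up), grep the service journal for the detection line:

```shell
# Exits 0 only if the ollama service logged an amdgpu gfx target.
journalctl -u ollama --no-pager | grep -E 'amdgpu is supported.*gpu_type=gfx[0-9a-f]+'
```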