llama.cpp with Vulkan support on macOS

Install dependencies:

brew install libomp vulkan-headers glslang molten-vk shaderc vulkan-loader

Homebrew's MoltenVK package is now above v1.4, so there is no longer any need to fetch an older version and manually apply PR2434.
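To confirm which MoltenVK version Homebrew installed before building (a quick sanity check; this is plain Homebrew usage, nothing llama.cpp-specific):

brew list --versions molten-vk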

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build -DLLAMA_CURL=1 -DGGML_METAL=OFF -DGGML_VULKAN=ON
cmake --build build --config Release
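The binaries end up in build/bin. A minimal smoke test, assuming you already have a GGUF model on disk (the model path below is a placeholder):

# Offload all layers to the GPU through the Vulkan backend
./build/bin/llama-cli -m ~/models/model.gguf -ngl 99 -p "Hello" -n 64

If the Vulkan backend was built correctly, the startup log should list your GPU as a Vulkan device.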

If you would rather keep an older MoltenVK with PR2434 applied manually, compile it yourself and overwrite the installed dylib; see the original post in ollama's Issue #1016. Tested with v1.3.
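A rough sketch of that overwrite step, assuming a Homebrew prefix of /opt/homebrew (Apple Silicon) and a patched libMoltenVK.dylib you have already built; both paths are placeholders to adjust:

# Back up the Homebrew-installed dylib, then drop in the patched build
cp /opt/homebrew/lib/libMoltenVK.dylib /opt/homebrew/lib/libMoltenVK.dylib.bak
cp /path/to/patched/libMoltenVK.dylib /opt/homebrew/lib/libMoltenVK.dylib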
