@danishcake
Created August 19, 2024 11:50
Dockerfile

# An ollama Dockerfile with a bunch of models built in.
# Useful for when you want to work offline.
# You can use this with bionic-gpt - when adding the model, the name should match the model tag,
# e.g. 'llama3.1:8b-instruct-q4_0'
# Build with: docker build -t ollama_multiple:latest .
FROM ollama/ollama

# Install the retrying download helper first, so the RUN steps below can use it
ADD download.sh /download.sh
RUN chmod +x /download.sh

# Add the classic llama3 image
RUN /download.sh 'llama3:8b'

# Add the new llama3.1 images
RUN /download.sh 'llama3.1:8b'
RUN /download.sh 'llama3.1:8b-instruct-q4_0'
RUN /download.sh 'llama3.1:8b-instruct-q8_0'

# Add a heavily quantised llama3 70b
RUN /download.sh 'llama3:70b-instruct-q2_K'
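
As a quick usage sketch (the ollama_multiple:latest tag comes from the build comment above; the container name 'ollama' and the 11434 port mapping are assumptions, 11434 being Ollama's default API port):

# Build the image; the retry loop in download.sh makes the model pulls resilient
docker build -t ollama_multiple:latest .

# Run it offline - the models are already baked into the image layers
docker run -d --name ollama -p 11434:11434 ollama_multiple:latest

# Confirm the baked-in models are present
docker exec ollama ollama list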
download.sh

#!/bin/bash
set -eu

# ollama pull has a nasty habit of failing. It can be resumed, so this script lets you retry
# the pull when building a Docker image.
# WARNING: It'll retry forever, regardless of the source of the error.
# Typo? Retry forever. No internet? Retry forever.

# Start the ollama server in the background and give it time to come up
ollama serve &
sleep 20

# Pull the requested model, retrying until it succeeds
while true; do
  if ollama pull "$1"; then
    echo "Download success"
    break
  fi
  echo "Failed, retrying"
  sleep 1
done

# Stop the background server
kill $!
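
If the unbounded retry is a concern, here is a minimal sketch of a bounded variant; MAX_ATTEMPTS is a name chosen for illustration and is not part of the original script:

#!/bin/bash
set -eu

# Bounded-retry variant (illustrative): give up after MAX_ATTEMPTS so a typo'd model name
# fails the build instead of hanging it forever
MAX_ATTEMPTS=20

ollama serve &
sleep 20

for attempt in $(seq 1 "$MAX_ATTEMPTS"); do
  if ollama pull "$1"; then
    echo "Download success"
    kill $!
    exit 0
  fi
  echo "Attempt $attempt failed, retrying"
  sleep 1
done

echo "Giving up after $MAX_ATTEMPTS attempts"
kill $!
exit 1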