
@adamwdennis
Created April 23, 2024 12:37
Running Llama3 locally...
# Make sure you have the following installed:
# 1. Homebrew for macOS
# 2. Docker Desktop for macOS
# Run the following commands, in order:
brew install ollama
ollama serve &
ollama pull llama3
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
# Go to http://localhost:3000
# Sign up (it's a local-only account, don't worry)
# At the top, choose `llama3` as your model
# Use this like you would ChatGPT
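Open WebUI is optional: by default `ollama serve` also exposes a REST API on `localhost:11434`, so you can query `llama3` straight from the terminal. A minimal sketch, assuming the default port and using a stand-in prompt (the reachability guard just avoids a hang if the server isn't running):

```shell
# Request body for Ollama's /api/generate endpoint; "stream": false asks
# for one complete JSON response instead of a token stream.
PAYLOAD='{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'

# Only send the request if the Ollama API is actually reachable.
if curl -s -o /dev/null --max-time 2 http://localhost:11434/api/tags; then
  curl -s http://localhost:11434/api/generate -d "$PAYLOAD"
else
  echo "Ollama API not reachable on localhost:11434 -- is 'ollama serve' running?"
fi
```

With `"stream": false`, the reply is a single JSON object whose `response` field holds the full completion.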