@cardboardcode
Last active March 9, 2026 06:09
# For People In A Hurry: How to Set Up Open-WebUI with Ollama & Claude Code as a Local, Privacy-Focused Coding Agent

## What Is This?

This is a quick copy-paste-observe guide for people in a hurry: it walks you through setting up Open-WebUI so you can run and access a local Large Language Model (LLM) and use it as a local coding agent.

## Build

1. Install Ollama using the command below:

   ```bash
   curl -fsSL https://ollama.com/install.sh | sh
   ```
2. Install Open-WebUI via Docker using the command below:

   ```bash
   docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
   ```
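If you prefer Docker Compose over a long `docker run` invocation, the same container can be described with a minimal `docker-compose.yml` sketch (this is a direct translation of the flags above, not an official compose file; adjust names and paths to taste):

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                # host port 3000 -> container port 8080
    volumes:
      - open-webui:/app/backend/data
    extra_hosts:
      - "host.docker.internal:host-gateway"
    restart: always

volumes:
  open-webui:
```

Run it with `docker compose up -d` from the directory containing the file.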
3. Access Open-WebUI via its dashboard:

   Dashboard Link: http://localhost:3000
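If the dashboard does not load, a quick way to check whether anything is listening on port 3000 is bash's built-in `/dev/tcp` pseudo-device (a sketch; this requires bash, not a POSIX-only shell):

```shell
# Prints "open" if something accepts connections on localhost:3000,
# "closed" otherwise (e.g. the container is still starting up).
if (exec 3<>/dev/tcp/localhost/3000) 2>/dev/null; then
  echo "open"
else
  echo "closed"
fi
```

If it prints "closed", give the container a few seconds and check `docker ps` to confirm it is running.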

4. Download an Ollama LLM model using the command below:

   ```bash
   ollama pull <model_name>
   # E.g. ollama pull qwen3:8b
   ```
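After pulling, you can confirm the model is available locally. `ollama list` prints a header row followed by one line per installed model; piping through `awk` extracts just the model names (a sketch; the exact column layout may vary across Ollama versions):

```shell
# Show all installed models, then print just the NAME column,
# skipping the header row.
ollama list
ollama list | awk 'NR > 1 {print $1}'
```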

> [!WARNING]
> Any model with the `cloud` tag is not fully local: it runs on Ollama's cloud models rather than on your machine.

> [!WARNING]
> Only certain LLMs support the tool calling that Claude Code needs. The following local models are recommended:
>
> - `glm-4.7-flash` - 19GB
> - `qwen3:8b` - 11B

5. Install Claude Code using the command below:

   ```bash
   curl -fsSL https://claude.ai/install.sh | bash
   ```
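Before moving on, it is worth sanity-checking that every tool installed in the steps above is actually on your `PATH` (a simple sketch using `command -v`):

```shell
# Report whether each required tool is installed and reachable.
for tool in ollama docker claude; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```

If any line reports "missing", revisit the corresponding install step before continuing.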

## Run

Launch Claude Code against your local Ollama models:

```bash
ollama launch claude
```

## References

  1. Claude Code Installation - https://code.claude.com/docs/en/quickstart
  2. Ollama Installation - https://ollama.com/
  3. Open-WebUI Installation - https://github.com/open-webui/open-webui