This is a quick copy-paste-observe guide for setting up Open-WebUI so you can run and access a local Large Language Model (LLM) and use it as a local coding agent.
- Install Ollama using the command below:
curl -fsSL https://ollama.com/install.sh | sh
- Install Open-WebUI via Docker using the command below:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
- Access Open-WebUI via its dashboard:
Dashboard Link: http://localhost:3000
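As a quick sanity check, you can probe the dashboard from the command line (a minimal sketch, assuming the default `3000:8080` port mapping from the `docker run` command above):

```shell
# Probe the Open-WebUI dashboard; assumes the container maps port 3000 on the host
if curl -fsS http://localhost:3000 >/dev/null 2>&1; then
  echo "Open-WebUI is reachable"
else
  echo "Open-WebUI is not reachable on port 3000"
fi
```

If the container was just started, it may take a few seconds before the dashboard responds.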
- Download an Ollama LLM model using the command below:
ollama pull <model_name>
# Eg. ollama pull qwen3:8b

Warning
Note that if you use any model with the cloud tag, it will not be fully local, as it runs on Ollama's cloud models.
Warning
Note that only certain LLMs support the tool calling needed by Claude Code. You can go with the following recommended local models:
- glm-4.7-flash - 19GB
- qwen3:8b - 11B
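After pulling a model, you can confirm it is available locally (a small sketch, assuming `ollama` is already on your PATH from the install step above):

```shell
# List models that have been pulled locally; prints a hint if ollama is missing
if command -v ollama >/dev/null 2>&1; then
  ollama list
else
  echo "ollama not found on PATH - run the install step above first"
fi
```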
- Install Claude Code using the command below:
curl -fsSL https://claude.ai/install.sh | bash
- Launch Claude Code against your local Ollama models using the command below:
ollama launch claude
- Claude Code Installation - https://code.claude.com/docs/en/quickstart
- Ollama Installation - https://ollama.com/
- Open-WebUI Installation - https://github.com/open-webui/open-webui
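As a final check, the sketch below verifies that each CLI from the steps above landed on your PATH (hypothetical helper loop, not part of any of the tools' own installers):

```shell
# Report whether each CLI used in this guide is installed and on PATH
for tool in ollama docker claude; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: not found"
  fi
done
```

Any `not found` line points you back to the corresponding install step above.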