Complete guide to set up OpenClaw AI assistant in an isolated VM with local LLM (no cloud APIs).
- Prerequisites
- VM Setup
- LM Studio Setup (Host)
- OpenClaw Installation (VM)
- Configuration
- Testing
- Troubleshooting
- Host Machine: 16GB+ RAM, modern CPU with virtualization support
- GPU: NVIDIA GPU for faster LLM inference (optional but recommended)
- Disk Space: 30GB+ free (20GB for LLM models, 10GB for VM)
- Ubuntu 24.04 (or similar Linux distribution)
- Internet connection for initial setup
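Before installing anything, it's worth confirming the host CPU exposes hardware virtualization, since KVM requires it. A minimal sketch; the `flags` string below is a truncated sample, and on a real host you would check the actual flags with `grep -E 'vmx|svm' /proc/cpuinfo`:

```shell
# Look for the vmx (Intel VT-x) or svm (AMD-V) CPU flag that KVM needs.
# Sample flags line shown; on the host, substitute the real /proc/cpuinfo flags.
flags='fpu vme de pse tsc msr pae vmx ssse3 sse4_1'
case "$flags" in
  *vmx*|*svm*) echo "virtualization supported" ;;
  *)           echo "no vmx/svm flag - enable VT-x/AMD-V in firmware" ;;
esac
```

If the flag is missing, enable VT-x/AMD-V in the BIOS/UEFI before continuing.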
sudo apt update
sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager
What this does: Installs QEMU/KVM virtualization and the virt-manager GUI for VM management.
cd ~/Downloads
wget https://releases.ubuntu.com/24.04.1/ubuntu-24.04.1-live-server-amd64.iso
What this does: Downloads the Ubuntu Server 24.04.1 ISO for the VM installation.
sudo mv ~/Downloads/ubuntu-24.04.1-live-server-amd64.iso /var/lib/libvirt/images/
sudo chown libvirt-qemu:kvm /var/lib/libvirt/images/ubuntu-24.04.1-live-server-amd64.iso
What this does: Moves the ISO to libvirt's standard location with the correct permissions.
- Open virt-manager:
virt-manager
- Click "Create a new virtual machine"
- Select "Local install media" → Browse to /var/lib/libvirt/images/ubuntu-24.04.1-live-server-amd64.iso
- Configure resources:
  - Memory: 4096 MB (4GB)
  - CPUs: 2
  - Disk: 20 GB
  - Network: NAT (default network)
- Name: openclaw-vm (or any name you prefer)
- Complete the Ubuntu Server installation:
  - Create user: <USERNAME> (choose your preferred username)
  - Install OpenSSH server when prompted
  - No additional packages needed
After VM boots, login and run:
ip addr show | grep "inet 192.168.122"
Expected output: Something like inet 192.168.122.XXX/24
Note this IP - you'll use it for SSH access. We'll refer to this as <VM_IP> throughout the guide.
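If you prefer to extract the address programmatically, the same `ip` output can be filtered with a grep pattern. Shown here against a captured sample line (the address 192.168.122.57 is only an illustrative stand-in); on the VM, pipe the real command output through the same grep:

```shell
# Pull the NAT-assigned 192.168.122.x address out of `ip -4 addr show` output.
# On the VM: ip -4 addr show | grep -oP 'inet \K192\.168\.122\.[0-9]+'
sample='    inet 192.168.122.57/24 brd 192.168.122.255 scope global enp1s0'
printf '%s\n' "$sample" | grep -oP 'inet \K192\.168\.122\.[0-9]+'
# → 192.168.122.57
```

The `\K` resets the match start, so only the address itself is printed (GNU grep with `-P`).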
SSH into the VM from your host:
ssh <USERNAME>@<VM_IP> # Replace with your username and VM IP
Inside the VM:
# Install and enable firewall
sudo apt update
sudo apt install -y ufw
sudo ufw default deny incoming
sudo ufw default allow outgoing
# Allow SSH from host network only (for security)
sudo ufw allow from 192.168.122.0/24 to any port 22
# Enable firewall
sudo ufw enable
sudo ufw status
What this does: Sets up a firewall that blocks all incoming connections except SSH from the host network.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Add your user to docker group
sudo usermod -aG docker $USER
# Logout and login again for group to take effect
exit
What this does: Installs Docker and Docker Compose, which OpenClaw uses.
# Download LM Studio (version 0.4.2-2 or later)
cd ~/Downloads
wget https://releases.lmstudio.ai/linux/0.4.2/LM-Studio-0.4.2-2-amd64.AppImage
# Make it executable and move to local bin
chmod +x LM-Studio-0.4.2-2-amd64.AppImage
mkdir -p ~/.local/bin
mv LM-Studio-0.4.2-2-amd64.AppImage ~/.local/bin/lm-studio
# Launch LM Studio
~/.local/bin/lm-studio
What this does: Installs the LM Studio AppImage, which will run the local LLM.
- In LM Studio, go to Search (the magnifying-glass icon)
- Search for: qwen2.5-coder-14b-instruct
- Download: Qwen2.5-Coder-14B-Instruct Q4_K_M (~9GB)
- Once downloaded, click Load Model in the left sidebar
- Select the Qwen2.5-Coder-14B-Instruct Q4_K_M model
What this does: Downloads and loads a 14B parameter coding-focused LLM that will power OpenClaw.
- In LM Studio, click Developer → Local Server
- Click Configure
- Set:
  - Network: 0.0.0.0 (listen on all interfaces)
  - Port: 1234 (default)
- Click Start Server
What this does: Starts the LM Studio API server so OpenClaw can connect to it.
On your host machine:
# Allow VM network to access LM Studio port
sudo ufw allow from 192.168.122.0/24 to any port 1234 comment 'LM Studio for OpenClaw VM'
What this does: Opens the firewall on the host so the VM can connect to LM Studio.
SSH into VM and test:
ssh <USERNAME>@<VM_IP> # Your VM IP
# Test connection to LM Studio
curl http://192.168.122.1:1234/v1/models
Expected output: JSON response with model information (should include "qwen2.5-coder-14b").
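You can script this check so it fails loudly when the model isn't loaded. The JSON below is a hand-written sample of the shape an OpenAI-compatible models endpoint returns, not captured output; on the VM, substitute `response=$(curl -s http://192.168.122.1:1234/v1/models)`:

```shell
# Check that the models endpoint lists the Qwen model.
# Sample JSON shown; on the VM, fetch the real response with curl instead.
response='{"data":[{"id":"qwen2.5-coder-14b","object":"model"}]}'
if printf '%s' "$response" | grep -q '"qwen2.5-coder-14b"'; then
  echo "LM Studio is reachable and serving the expected model"
else
  echo "model not found - check LM Studio's server and loaded model" >&2
fi
```

A plain `grep` keeps the check dependency-free; `jq` would be cleaner but isn't installed on a stock Ubuntu Server VM.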
All commands below run inside the VM (SSH session).
cd ~
git clone https://github.com/openclaw/openclaw.git
cd openclaw
What this does: Downloads the OpenClaw source code.
mkdir -p ~/.openclaw/{credentials,canvas,cron}
mkdir -p ~/openclaw-workspace
What this does: Creates the necessary directories for OpenClaw configuration and workspace.
Create ~/.openclaw/openclaw.json:
cat > ~/.openclaw/openclaw.json << 'EOF'
{
  agents: {
    defaults: {
      workspace: "~/workspace",
      model: { primary: "lm-studio/qwen2.5-coder-14b" },
      models: {
        "lm-studio/qwen2.5-coder-14b": {
          alias: "qwen"
        }
      },
      sandbox: {
        mode: "off"
      }
    }
  },
  models: {
    mode: "merge",
    providers: {
      "lm-studio": {
        baseUrl: "http://192.168.122.1:1234/v1",
        apiKey: "not-needed",
        api: "openai-completions",
        models: [
          {
            id: "qwen2.5-coder-14b",
            maxTokens: 32768,
            contextWindow: 32768
          }
        ]
      }
    }
  },
  gateway: {
    mode: "local",
    port: <CUSTOM_PORT>
  }
}
EOF
Note: Replace <CUSTOM_PORT> with your chosen port number. Using a non-standard port adds a layer of security through obscurity.
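A quick guard against forgetting the substitution: grep for the literal placeholder before starting anything. Demonstrated on an inline sample string; on the VM, point the grep at ~/.openclaw/openclaw.json instead:

```shell
# A config still containing the literal <CUSTOM_PORT> placeholder is not ready.
# Sample text shown; on the VM: grep -q '<CUSTOM_PORT>' ~/.openclaw/openclaw.json
cfg_sample='gateway: { mode: "local", port: <CUSTOM_PORT> }'
if printf '%s' "$cfg_sample" | grep -q '<CUSTOM_PORT>'; then
  echo "placeholder still present - replace <CUSTOM_PORT> before starting"
fi
```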
What this does: Configures OpenClaw to:
- Use LM Studio on the host at 192.168.122.1:1234
- Run on a custom port (instead of the default 18789)
- Use sandbox mode "off" (simpler, suitable for an isolated VM)
- Set the workspace to ~/workspace (matches the Docker volume mount)
Create ~/openclaw/docker-compose.override.yml:
cat > ~/openclaw/docker-compose.override.yml << 'EOF'
services:
  openclaw-gateway:
    network_mode: host
    environment:
      HOME: /home/openclaw
      OPENCLAW_GATEWAY_PORT: <CUSTOM_PORT>
    env_file:
      - ${HOME}/.openclaw/credentials/.env
    command:
      [
        "node",
        "dist/index.js",
        "gateway",
        "--bind",
        "lan",
        "--port",
        "<CUSTOM_PORT>",
      ]
    volumes:
      - ${HOME}/.openclaw:/home/openclaw/.openclaw
      - ${HOME}/openclaw-workspace:/home/openclaw/workspace
      - /var/run/docker.sock:/var/run/docker.sock
    security_opt:
      - no-new-privileges:true
EOF
Note: Replace <CUSTOM_PORT> (in both places) with the same port number you chose in the openclaw.json config above.
What this does: Configures Docker Compose to:
- Use host networking (so the container can reach LM Studio on the host)
- Run on your custom port
- Mount the config and workspace directories
- Apply security hardening with no-new-privileges
touch ~/.openclaw/credentials/.env
chmod 600 ~/.openclaw/credentials/.env
What this does: Creates the credentials file (empty, since we're using a local LLM and no API keys are needed) and restricts its permissions.
cd ~/openclaw
docker compose up -d
What this does: Builds and starts the OpenClaw gateway container in the background.
# Check logs
docker compose logs openclaw-gateway | tail -n 20
# Should see:
# [gateway] agent model: lm-studio/qwen2.5-coder-14b
# [gateway] listening on ws://0.0.0.0:<CUSTOM_PORT>
Expected output: Gateway started successfully, listening on your custom port.
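To pull just the bound port out of the logs (for example, to confirm it matches your chosen <CUSTOM_PORT>), a grep with a lookbehind works. The log line below is a hand-written sample and 28790 is only an illustrative stand-in for your port; on the VM, pipe the real logs through the same grep:

```shell
# Extract the port from the gateway's "listening" log line.
# On the VM: docker compose logs openclaw-gateway | grep listening | grep -oP ...
log='[gateway] listening on ws://0.0.0.0:28790'
printf '%s\n' "$log" | grep -oP 'ws://[0-9.]+:\K[0-9]+'
# → 28790
```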
docker exec -it openclaw-openclaw-gateway-1 node dist/index.js tui
What this does: Opens an interactive terminal UI where you can chat with the AI.
Expected behavior:
- You'll see a chat interface
- Type a message like "Hello! Can you introduce yourself?"
- Press Enter
- You should get a response from the Qwen model running in LM Studio
To exit: Press Ctrl+C
docker exec openclaw-openclaw-gateway-1 node dist/index.js agent --message "What is 2+2?" --local
What this does: Sends a single message to the AI and gets a response without opening the full UI.
Symptom: Bind for :::<CUSTOM_PORT> failed: port is already allocated
Solution:
# Remove old containers
docker rm -f $(docker ps -aq --filter "name=openclaw")
# Start fresh
docker compose up -d
If that doesn't work, reboot the VM:
sudo reboot
Then after reboot:
cd ~/openclaw
docker compose up -d
Check: Make sure your docker-compose.override.yml uses network_mode: host (not a ports: mapping with a bridge network).
Check: Verify sandbox: { mode: "off" } in ~/.openclaw/openclaw.json.
Check LM Studio on host:
# On host machine
curl http://192.168.122.1:1234/v1/models
Should return JSON with model info.
Check connectivity from container:
# In VM
docker exec openclaw-openclaw-gateway-1 curl -s http://192.168.122.1:1234/v1/models
Should also return JSON.
Verify LM Studio is listening on all interfaces:
# On host
ss -tlnp | grep 1234
Should show 0.0.0.0:1234, NOT 127.0.0.1:1234.
If it shows 127.0.0.1:1234, reconfigure LM Studio to listen on 0.0.0.0.
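The distinction can be scripted: a 127.0.0.1 binding is loopback-only (unreachable from the VM), while 0.0.0.0 means all interfaces. The `line` below is a hand-written sample of an `ss` listen entry; on the host, substitute `line=$(ss -tln | grep :1234)`:

```shell
# Classify an ss listen line for port 1234.
# Sample line shown; on the host, capture the real one with: ss -tln | grep :1234
line='LISTEN 0 4096 0.0.0.0:1234 0.0.0.0:*'
case "$line" in
  *127.0.0.1:1234*) echo "loopback only - reconfigure LM Studio to 0.0.0.0" ;;
  *0.0.0.0:1234*)   echo "listening on all interfaces - OK" ;;
esac
```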
Symptom: EACCES: permission denied, mkdir '/home/openclaw/openclaw-workspace'
Solution: The workspace path in config doesn't match the Docker volume mount.
Verify workspace setting:
grep workspace ~/.openclaw/openclaw.json
Should show: workspace: "~/workspace"
NOT: workspace: "~/openclaw-workspace"
Fix if needed:
sed -i 's|workspace: "~/openclaw-workspace"|workspace: "~/workspace"|' ~/.openclaw/openclaw.json
docker compose restart openclaw-gateway
# Start
cd ~/openclaw && docker compose up -d
# Stop
cd ~/openclaw && docker compose stop
# Restart
cd ~/openclaw && docker compose restart openclaw-gateway
# View logs
docker compose logs openclaw-gateway -f
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# Inspect container details
docker inspect openclaw-openclaw-gateway-1
# Open a shell inside the container
docker exec -it openclaw-openclaw-gateway-1 bash
# Update OpenClaw
cd ~/openclaw
git pull
docker compose down
docker compose build
docker compose up -d

+-----------------------------------------------------------+
| Host Machine (Ubuntu 24.04)                               |
|                                                           |
|   +----------------------+                                |
|   | LM Studio            |                                |
|   | Port: 1234           |                                |
|   | Model: Qwen2.5-14B   |                                |
|   +----------------------+                                |
|              ^                                            |
|              | HTTP API (192.168.122.1:1234)              |
|              |                                            |
|   +----------+----------------------------------------+   |
|   | VM (Ubuntu Server 24.04)                          |   |
|   | IP: 192.168.122.XXX                               |   |
|   |                                                   |   |
|   |   +-------------------------------------------+   |   |
|   |   | Docker Container                          |   |   |
|   |   | (network_mode: host)                      |   |   |
|   |   |                                           |   |   |
|   |   |   +-----------------------------------+   |   |   |
|   |   |   | OpenClaw Gateway                  |   |   |   |
|   |   |   | Port: <CUSTOM_PORT>               |   |   |   |
|   |   |   | WebSocket Server                  |   |   |   |
|   |   |   +-----------------------------------+   |   |   |
|   |   +-------------------------------------------+   |   |
|   +---------------------------------------------------+   |
+-----------------------------------------------------------+
Network Flow:
- You connect to the OpenClaw TUI via docker exec
- The OpenClaw Gateway (in the container) connects to LM Studio on the host via 192.168.122.1:1234
- LM Studio runs the Qwen model and returns responses
- OpenClaw displays the responses in the TUI
Security Layers:
- ✅ Isolated VM with firewall (only SSH allowed from host)
- ✅ No cloud APIs (all data stays local)
- ✅ Docker container isolation
- ✅ Custom non-standard port
- ✅ Host firewall restricts LM Studio access to the VM network only
OpenClaw can integrate with:
- Telegram
- Discord
- Slack
- Signal
- iMessage
See OpenClaw docs for channel configuration.
For better security when running untrusted code, you can enable Docker sandbox mode:
- Install Docker CLI in the container (requires custom Dockerfile)
- Change sandbox: { mode: "off" } to sandbox: { mode: "all" } in the config
- Restart the gateway
This is more complex but provides stronger isolation.
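The config change above can be done with a one-line sed, assuming the `sandbox: { mode: "off" }` formatting used in this guide's openclaw.json. Demonstrated on a sample line; on the VM, run sed with `-i` against ~/.openclaw/openclaw.json and then restart the gateway:

```shell
# Flip the sandbox mode from "off" to "all".
# Sample line shown; on the VM:
#   sed -i 's/mode: "off"/mode: "all"/' ~/.openclaw/openclaw.json
printf 'sandbox: { mode: "off" }\n' | sed 's/mode: "off"/mode: "all"/'
# → sandbox: { mode: "all" }
```

Note the pattern is tied to this guide's exact spacing; if you reformatted the file, adjust the sed expression accordingly.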
- OpenClaw GitHub: https://github.com/openclaw/openclaw
- OpenClaw Docs: https://docs.openclaw.ai
- LM Studio: https://lmstudio.ai
- Qwen Models: https://huggingface.co/Qwen
Setup guide created based on practical installation experience with:
- Ubuntu 24.04 (host + VM)
- QEMU/KVM virtualization
- LM Studio 0.4.2-2
- OpenClaw 2026.2.6-3
- Qwen2.5-Coder-14B-Instruct Q4_K_M
Last Updated: February 8, 2026