@nirajpandkar
Created February 9, 2026 02:15

OpenClaw Secure VM Installation Guide

A complete guide to setting up the OpenClaw AI assistant in an isolated VM with a local LLM (no cloud APIs).

Table of Contents

  1. Prerequisites
  2. VM Setup
  3. LM Studio Setup (Host)
  4. OpenClaw Installation (VM)
  5. Configuration
  6. Testing
  7. Troubleshooting
  8. Useful Commands
  9. Architecture Summary
  10. Next Steps

Prerequisites

Hardware Requirements

  • Host Machine: 16GB+ RAM, modern CPU with virtualization support
  • GPU: NVIDIA GPU recommended for faster LLM inference (optional but recommended)
  • Disk Space: 30GB+ free (~10GB for the LLM model, 20GB for the VM disk)

Software Requirements (Host)

  • Ubuntu 24.04 (or similar Linux distribution)
  • Internet connection for initial setup

VM Setup

Step 1: Install Virtualization Software

sudo apt update
sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager

What this does: Installs QEMU/KVM virtualization and virt-manager GUI for VM management.

Step 2: Download Ubuntu Server ISO

cd ~/Downloads
wget https://releases.ubuntu.com/24.04.1/ubuntu-24.04.1-live-server-amd64.iso

What this does: Downloads Ubuntu Server 24.04.1 ISO for the VM installation.

Step 3: Move ISO to Proper Location

sudo mv ~/Downloads/ubuntu-24.04.1-live-server-amd64.iso /var/lib/libvirt/images/
sudo chown libvirt-qemu:kvm /var/lib/libvirt/images/ubuntu-24.04.1-live-server-amd64.iso

What this does: Moves ISO to libvirt's standard location with correct permissions.

Step 4: Create VM Using virt-manager

  1. Launch virt-manager from a terminal: virt-manager
  2. Click "Create a new virtual machine"
  3. Select "Local install media" β†’ Browse to /var/lib/libvirt/images/ubuntu-24.04.1-live-server-amd64.iso
  4. Configure resources:
    • Memory: 4096 MB (4GB)
    • CPUs: 2
    • Disk: 20 GB
  5. Network: NAT (default network)
  6. Name: openclaw-vm (or any name you prefer)
  7. Complete the Ubuntu Server installation:
    • Create user: <USERNAME> (choose your preferred username)
    • Install OpenSSH server when prompted
    • No additional packages needed

Step 5: Find VM IP Address

After the VM boots, log in and run:

ip addr show | grep "inet 192.168.122"

Expected output: Something like inet 192.168.122.XXX/24

Note this IP - you'll use it for SSH access. We'll refer to this as <VM_IP> throughout the guide.
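If you'd rather capture the address into a shell variable, a small helper can pull it out of the ip addr output. This is a sketch; it assumes libvirt's default 192.168.122.0/24 NAT network:

```shell
# extract the first 192.168.122.x address from `ip addr` output
vm_ip_from() {
  grep -oE '192\.168\.122\.[0-9]+' <<< "$1" | head -n 1
}

# usage inside the VM:
#   VM_IP=$(vm_ip_from "$(ip addr show)")
#   echo "$VM_IP"
```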

Step 6: Configure VM Firewall

SSH into the VM from your host:

ssh <USERNAME>@<VM_IP>  # Replace with your username and VM IP

Inside the VM:

# Install and enable firewall
sudo apt update
sudo apt install -y ufw
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow SSH from host network only (for security)
sudo ufw allow from 192.168.122.0/24 to any port 22

# Enable firewall
sudo ufw enable
sudo ufw status

What this does: Sets up a firewall that blocks all incoming connections except SSH from the host network.
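To double-check that the rule landed, you can grep the ufw status output. A hypothetical helper, written for ufw's default table layout:

```shell
# return success if an ALLOW rule for port 22 from the host network
# appears in the output of `sudo ufw status`
ssh_rule_present() {
  grep -Eq '^22[/ ].*ALLOW.*192\.168\.122\.0/24' <<< "$1"
}

# usage inside the VM:
#   ssh_rule_present "$(sudo ufw status)" && echo "SSH rule OK"
```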

Step 7: Install Docker in VM

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Add your user to docker group
sudo usermod -aG docker $USER

# Log out and back in for the group change to take effect
exit

What this does: Installs Docker and Docker Compose, which OpenClaw uses.


LM Studio Setup (Host)

Step 8: Install LM Studio on Host Machine

# Download LM Studio (version 0.4.2-2 or later)
cd ~/Downloads
wget https://releases.lmstudio.ai/linux/0.4.2/LM-Studio-0.4.2-2-amd64.AppImage

# Make it executable and move to local bin
chmod +x LM-Studio-0.4.2-2-amd64.AppImage
mkdir -p ~/.local/bin
mv LM-Studio-0.4.2-2-amd64.AppImage ~/.local/bin/lm-studio

# Launch LM Studio
~/.local/bin/lm-studio

What this does: Installs LM Studio AppImage which will run the local LLM.

Step 9: Download and Load Qwen Model in LM Studio

  1. In LM Studio, go to Search (πŸ”)
  2. Search for: qwen2.5-coder-14b-instruct
  3. Download: Qwen2.5-Coder-14B-Instruct Q4_K_M (~9GB)
  4. Once downloaded, click Load Model in the left sidebar
  5. Select the Qwen2.5-Coder-14B-Instruct Q4_K_M model

What this does: Downloads and loads a 14B parameter coding-focused LLM that will power OpenClaw.

Step 10: Configure LM Studio Server

  1. In LM Studio, click Developer β†’ Local Server
  2. Click Configure
  3. Set:
    • Network: 0.0.0.0 (listen on all interfaces)
    • Port: 1234 (default)
  4. Click Start Server

What this does: Starts the LM Studio API server so OpenClaw can connect to it.

Step 11: Allow VM to Access LM Studio

On your host machine:

# Allow VM network to access LM Studio port
sudo ufw allow from 192.168.122.0/24 to any port 1234 comment 'LM Studio for OpenClaw VM'

What this does: Opens firewall on host to allow VM to connect to LM Studio.

Step 12: Test Connectivity from VM

SSH into VM and test:

ssh <USERNAME>@<VM_IP>  # Your VM IP

# Test connection to LM Studio
curl http://192.168.122.1:1234/v1/models

Expected output: JSON response with model information (should include "qwen2.5-coder-14b").
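If you want just the model ids rather than the raw JSON, a small parser using Python's stdlib works. This is a sketch; it assumes the OpenAI-compatible response shape {"data": [{"id": ...}]} that /v1/models endpoints serve:

```shell
# print one model id per line from a /v1/models response on stdin
list_models() {
  python3 -c 'import json, sys
for m in json.load(sys.stdin)["data"]:
    print(m["id"])'
}

# usage from the VM:
#   curl -s http://192.168.122.1:1234/v1/models | list_models
```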


OpenClaw Installation (VM)

All commands below run inside the VM (SSH session).

Step 13: Clone OpenClaw Repository

cd ~
git clone https://github.com/openclaw/openclaw.git
cd openclaw

What this does: Downloads the OpenClaw source code.

Step 14: Create Directory Structure

mkdir -p ~/.openclaw/{credentials,canvas,cron}
mkdir -p ~/openclaw-workspace

What this does: Creates necessary directories for OpenClaw configuration and workspace.
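mkdir -p is idempotent, so the commands above are safe to re-run. A quick check that the layout is in place:

```shell
# verify the expected directories exist
for d in ~/.openclaw/credentials ~/.openclaw/canvas ~/.openclaw/cron ~/openclaw-workspace; do
  [ -d "$d" ] && echo "OK: $d" || echo "MISSING: $d"
done
```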


Configuration

Step 15: Create OpenClaw Configuration

Create ~/.openclaw/openclaw.json:

cat > ~/.openclaw/openclaw.json << 'EOF'
{
  "agents": {
    "defaults": {
      "workspace": "~/workspace",
      "model": { "primary": "lm-studio/qwen2.5-coder-14b" },
      "models": {
        "lm-studio/qwen2.5-coder-14b": {
          "alias": "qwen"
        }
      },
      "sandbox": {
        "mode": "off"
      }
    }
  },
  "models": {
    "mode": "merge",
    "providers": {
      "lm-studio": {
        "baseUrl": "http://192.168.122.1:1234/v1",
        "apiKey": "not-needed",
        "api": "openai-completions",
        "models": [
          {
            "id": "qwen2.5-coder-14b",
            "maxTokens": 32768,
            "contextWindow": 32768
          }
        ]
      }
    }
  },
  "gateway": {
    "mode": "local",
    "port": <CUSTOM_PORT>
  }
}
EOF

Note: Replace <CUSTOM_PORT> with your chosen port number. A non-standard port is obscurity rather than real security (the firewall rules do the actual gatekeeping), but it cuts down noise from automated scanners.

What this does: Configures OpenClaw to:

  • Use LM Studio on the host at 192.168.122.1:1234
  • Run on custom port (instead of default 18789)
  • Use sandbox mode "off" (simpler, suitable for isolated VM)
  • Set workspace to ~/workspace (matches Docker volume mount)
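Since a typo here can keep the gateway from starting, it's worth validating the file before moving on. A minimal check using Python's bundled JSON parser (note: if your OpenClaw build accepts a relaxed JSON5-style config, this strict check may be overly picky):

```shell
# fail loudly if the config is not valid JSON
CONF="$HOME/.openclaw/openclaw.json"
if python3 -m json.tool "$CONF" > /dev/null 2>&1; then
  echo "config OK"
else
  echo "config missing or invalid: $CONF"
fi
```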

Step 16: Create Docker Compose Override

Create ~/openclaw/docker-compose.override.yml:

cat > ~/openclaw/docker-compose.override.yml << 'EOF'
services:
  openclaw-gateway:
    network_mode: host
    environment:
      HOME: /home/openclaw
      OPENCLAW_GATEWAY_PORT: <CUSTOM_PORT>
    env_file:
      - ${HOME}/.openclaw/credentials/.env
    command:
      [
        "node",
        "dist/index.js",
        "gateway",
        "--bind",
        "lan",
        "--port",
        "<CUSTOM_PORT>",
      ]
    volumes:
      - ${HOME}/.openclaw:/home/openclaw/.openclaw
      - ${HOME}/openclaw-workspace:/home/openclaw/workspace
      - /var/run/docker.sock:/var/run/docker.sock
    security_opt:
      - no-new-privileges:true
EOF

Note: Replace <CUSTOM_PORT> with the same port number you chose in Step 15.

What this does: Configures Docker Compose to:

  • Use host networking (so container can reach host LM Studio)
  • Run on your custom port
  • Mount config and workspace directories
  • Apply security hardening with no-new-privileges

Step 17: Create Empty Credentials File

touch ~/.openclaw/credentials/.env
chmod 600 ~/.openclaw/credentials/.env

What this does: Creates credentials file (empty since we're using local LLM, no API keys needed).

Step 18: Start OpenClaw Gateway

cd ~/openclaw
docker compose up -d

What this does: Builds and starts the OpenClaw gateway container in the background.

Step 19: Verify Gateway is Running

# Check logs
docker compose logs openclaw-gateway | tail -n 20

# Should see:
# [gateway] agent model: lm-studio/qwen2.5-coder-14b
# [gateway] listening on ws://0.0.0.0:<CUSTOM_PORT>

Expected output: Gateway started successfully, listening on your custom port.
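If you script this step, you can key off the listening line rather than eyeballing the logs. A sketch, matching the expected log text shown above:

```shell
# return success once the logs contain the gateway's listening line
gateway_ready() {
  grep -q 'listening on ws://' <<< "$1"
}

# usage:
#   gateway_ready "$(docker compose logs openclaw-gateway)" && echo "gateway up"
```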


Testing

Step 20: Test the Terminal UI

docker exec -it openclaw-openclaw-gateway-1 node dist/index.js tui

What this does: Opens an interactive terminal UI where you can chat with the AI.

Expected behavior:

  • You'll see a chat interface
  • Type a message like "Hello! Can you introduce yourself?"
  • Press Enter
  • You should get a response from the Qwen model running in LM Studio

To exit: Press Ctrl+C

Step 21: Test One-off Commands

docker exec openclaw-openclaw-gateway-1 node dist/index.js agent --message "What is 2+2?" --local

What this does: Sends a single message to the AI and gets a response without opening the full UI.


Troubleshooting

Port Already Allocated Error

Symptom: Bind for :::<CUSTOM_PORT> failed: port is already allocated

Solution:

# Remove old containers
docker rm -f $(docker ps -aq --filter "name=openclaw")

# Start fresh
docker compose up -d

If that doesn't work, reboot the VM:

sudo reboot

Then after reboot:

cd ~/openclaw
docker compose up -d

Gateway Keeps Restarting / EACCES Errors

Check: Make sure your docker-compose.override.yml uses network_mode: host (not ports: mapping with bridge network).

Check: Verify that the sandbox mode is set to "off" in ~/.openclaw/openclaw.json.

No Response from AI (TUI shows "no output")

Check LM Studio on host:

# On host machine
curl http://192.168.122.1:1234/v1/models

Should return JSON with model info.

Check connectivity from container:

# In VM
docker exec openclaw-openclaw-gateway-1 curl -s http://192.168.122.1:1234/v1/models

Should also return JSON.

Verify LM Studio is listening on all interfaces:

# On host
ss -tlnp | grep 1234

Should show 0.0.0.0:1234, NOT 127.0.0.1:1234.

If it shows 127.0.0.1:1234, reconfigure LM Studio to listen on 0.0.0.0.
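That check can be scripted too. A small helper that classifies the ss output, matching the bind addresses discussed above:

```shell
# classify a `ss -tln` listing: is port 1234 reachable beyond loopback?
check_bind() {
  if grep -q '0\.0\.0\.0:1234' <<< "$1"; then
    echo "listening on all interfaces"
  elif grep -q '127\.0\.0\.1:1234' <<< "$1"; then
    echo "loopback only - rebind LM Studio to 0.0.0.0"
  else
    echo "port 1234 not listening"
  fi
}

# usage on the host:
#   check_bind "$(ss -tln | grep ':1234')"
```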

Permission Denied for Workspace

Symptom: EACCES: permission denied, mkdir '/home/openclaw/openclaw-workspace'

Cause: The workspace path in the config doesn't match the Docker volume mount.

Verify the workspace setting:

grep workspace ~/.openclaw/openclaw.json

The workspace value should be "~/workspace" (the path mounted inside the container), NOT "~/openclaw-workspace".

Fix if needed:

sed -i 's|"~/openclaw-workspace"|"~/workspace"|' ~/.openclaw/openclaw.json
docker compose restart openclaw-gateway

Useful Commands

Start/Stop Gateway

# Start
cd ~/openclaw && docker compose up -d

# Stop
cd ~/openclaw && docker compose stop

# Restart
cd ~/openclaw && docker compose restart openclaw-gateway

# View logs
docker compose logs openclaw-gateway -f

Check Container Status

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# Inspect container details
docker inspect openclaw-openclaw-gateway-1

Access Gateway Shell

docker exec -it openclaw-openclaw-gateway-1 bash

Update OpenClaw

cd ~/openclaw
git pull
docker compose down
docker compose build
docker compose up -d

Architecture Summary

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Host Machine (Ubuntu 24.04)                                 β”‚
β”‚                                                               β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                                   β”‚
β”‚  β”‚ LM Studio            β”‚                                   β”‚
β”‚  β”‚ Port: 1234           β”‚                                   β”‚
β”‚  β”‚ Model: Qwen2.5-14B   β”‚                                   β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                                   β”‚
β”‚            β–²                                                  β”‚
β”‚            β”‚ HTTP API (192.168.122.1:1234)                  β”‚
β”‚            β”‚                                                  β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”‚
β”‚  β”‚ VM (Ubuntu Server 24.04)                               β”‚β”‚
β”‚  β”‚ IP: 192.168.122.XXX                                    β”‚β”‚
β”‚  β”‚                                                         β”‚β”‚
β”‚  β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚β”‚
β”‚  β”‚  β”‚ Docker Container                                  β”‚ β”‚β”‚
β”‚  β”‚  β”‚ (network_mode: host)                             β”‚ β”‚β”‚
β”‚  β”‚  β”‚                                                   β”‚ β”‚β”‚
β”‚  β”‚  β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”‚ β”‚β”‚
β”‚  β”‚  β”‚  β”‚ OpenClaw Gateway                            β”‚β”‚ β”‚β”‚
β”‚  β”‚  β”‚  β”‚ Port: <CUSTOM_PORT>                         β”‚β”‚ β”‚β”‚
β”‚  β”‚  β”‚  β”‚ WebSocket Server                            β”‚β”‚ β”‚β”‚
β”‚  β”‚  β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β”‚ β”‚β”‚
β”‚  β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Network Flow:

  1. You connect to OpenClaw TUI via docker exec
  2. OpenClaw Gateway (in container) connects to LM Studio on host via 192.168.122.1:1234
  3. LM Studio runs Qwen model and returns responses
  4. OpenClaw displays responses in TUI

Security Layers:

  • βœ… Isolated VM with firewall (only SSH allowed from host)
  • βœ… No cloud APIs (all data stays local)
  • βœ… Docker container isolation
  • βœ… Custom non-standard port
  • βœ… Host firewall restricts LM Studio access to VM network only

Next Steps

Optional: Configure Messaging Channels

OpenClaw can integrate with:

  • Telegram
  • Discord
  • Slack
  • WhatsApp
  • Signal
  • iMessage

See OpenClaw docs for channel configuration.

Optional: Enable Sandbox Mode

For better security when running untrusted code, you can enable Docker sandbox mode:

  1. Install Docker CLI in the container (requires custom Dockerfile)
  2. Change sandbox: { mode: "off" } to sandbox: { mode: "all" } in config
  3. Restart gateway

This is more complex but provides stronger isolation.
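Assuming the same config schema as Step 15, the change in ~/.openclaw/openclaw.json is just the sandbox block under agents.defaults:

```json
"sandbox": {
  "mode": "all"
}
```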


Credits

Setup guide created based on practical installation experience with:

  • Ubuntu 24.04 (host + VM)
  • QEMU/KVM virtualization
  • LM Studio 0.4.2-2
  • OpenClaw 2026.2.6-3
  • Qwen2.5-Coder-14B-Instruct Q4_K_M

Last Updated: February 8, 2026
