@alexeygrigorev
Last active February 28, 2026 14:04

Hetzner AX41-NVMe Setup

Setup docs for Hetzner AX41-NVMe dedicated server (AMD Ryzen 5 3600, 64 GiB RAM, 2x 512 GB NVMe).

export HETZNERIP="your ip address"

Order: B20260219-3371017-2946215 (19 Feb 2026) Server: AX41-NVMe #2855249, Helsinki (eu-west)

How to get a server like this

  1. Go to https://www.hetzner.com/dedicated-rootserver/matrix-ax/ and pick an AX model
  2. Pick a datacenter location (Helsinki, Falkenstein, etc.)
  3. After ordering, the server is delivered in Rescue System (usually within minutes)
  4. Hetzner emails you the IP, root password, and IPv6 subnet
  5. SSH into the rescue system and run installimage to install your OS
  6. After reboot, you have a clean Ubuntu server at the IP from the email

Pricing (as of Feb 2026): AX41-NVMe is ~38 EUR/mo for AMD Ryzen 5 3600, 64 GiB RAM, 2x 512 GB NVMe.

Reproducing the setup

Follow the docs below in order (02 through 10). Each file has copy-pasteable commands.

For agents: see gist.md for instructions on reading and updating this gist.

Docs

OS and Hardware

Hardware

  • CPU: AMD Ryzen 5 3600 — 6 cores / 12 threads @ 3.6 GHz
  • RAM: 64 GiB
  • Storage: 2x 512 GB Samsung NVMe (MZVL2512HCJQ-00B00)

1. Connect to the instance

Server was delivered in Hetzner Rescue System. Connected via SSH with the provided root credentials:

ssh root@${HETZNERIP}

2. Install Ubuntu

Ran installimage with the following config (no RAID, second NVMe left untouched):

DRIVE1 /dev/nvme0n1
DRIVE2 /dev/nvme1n1
FORMATDRIVE2 0

SWRAID 0
SWRAIDLEVEL 1

BOOTLOADER grub

HOSTNAME hetzner

PART swap swap 8G
PART /boot ext4 1024M
PART /    ext4 all

IMAGE /root/.oldroot/nfs/images/Ubuntu-2404-noble-amd64-base.tar.gz

Installed Ubuntu 24.04 (Noble) on nvme0n1. Rebooted into the new OS.

3. Mount second NVMe at /data

mkfs.ext4 -L data /dev/nvme1n1
mkdir -p /data
mount /dev/nvme1n1 /data
chown alexey:alexey /data
echo '/dev/nvme1n1 /data ext4 defaults 0 2' >> /etc/fstab
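Device names like /dev/nvme1n1 can change if NVMe enumeration order changes across reboots. A more robust variant of the fstab entry (a sketch, not what was used — the UUID is a placeholder) references the filesystem UUID instead:

```
# /etc/fstab — UUID-based alternative to the /dev/nvme1n1 entry above.
# Find the real UUID with: blkid -s UUID -o value /dev/nvme1n1
# "nofail" lets the system boot even if the data drive is missing.
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /data ext4 defaults,nofail 0 2
```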

Disk layout:

  • nvme0n1 — 512 GB, Ubuntu (32G swap, 1G /boot, 444G /)
  • nvme1n1 — 512 GB, ext4, mounted at /data (445 GB usable, owned by alexey)

SSH and Users

1. Create SSH key and copy to server

Generated a dedicated ed25519 key:

ssh-keygen -t ed25519 -f ~/.ssh/id_hetzner -C "hetzner" -N ""

Copied the public key to the server:

ssh-copy-id -i ~/.ssh/id_hetzner.pub root@${HETZNERIP}

2. Create user alexey with sudo

useradd -m -s /bin/bash -G sudo alexey
echo 'alexey:<password>' | chpasswd
mkdir -p /home/alexey/.ssh
cp /root/.ssh/authorized_keys /home/alexey/.ssh/
chown -R alexey:alexey /home/alexey/.ssh
chmod 700 /home/alexey/.ssh
chmod 600 /home/alexey/.ssh/authorized_keys

3. Passwordless sudo

# as root
echo "alexey ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/alexey
chmod 440 /etc/sudoers.d/alexey

4. SSH config aliases

~/.ssh/config on the local machine:

Host hetzner
    HostName ${HETZNERIP}
    User alexey
    IdentityFile ~/.ssh/id_hetzner
    StrictHostKeyChecking no

ssh fhetzner — SSH with port forwarding (ttyd, dev servers, jupyter):

Host fhetzner
    HostName ${HETZNERIP}
    User alexey
    IdentityFile ~/.ssh/id_hetzner
    StrictHostKeyChecking no
    LocalForward 7681 localhost:7681
    LocalForward 8080 localhost:8080
    LocalForward 8888 localhost:8888
    ServerAliveInterval 60

Forwarded ports:

  • 7681 — ttyd web terminal
  • 8080 — dev server
  • 8888 — jupyter

Dev Tools

All commands run on the server as the alexey user unless noted otherwise.

nvm (+ Node + npm)

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
source ~/.bashrc
nvm install --lts

This also installs nvm's PATH setup into ~/.bashrc automatically.
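For reference, the sourcing block the installer appends to ~/.bashrc looks like this (the standard nvm v0.40.x snippet; the one-line PATH exports in the Shell Configuration section below replace it):

```
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion
```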

rvm + Ruby

gpg2 --keyserver hkp://keyserver.ubuntu.com \
  --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 \
              7D2BAF1CF37B13E2069D6956105BD0E739499BDB
curl -sSL https://get.rvm.io | bash -s stable
source ~/.rvm/scripts/rvm
rvm get stable              # update rvm's ruby database
rvm install 3.3.7           # latest stable (rvm list known may be stale, use explicit version)
rvm use 3.3.7 --default

uv

curl -LsSf https://astral.sh/uv/install.sh | sh

Docker (run as root)

curl -fsSL https://get.docker.com | sh
usermod -aG docker alexey

GitHub CLI

sudo apt install -y gh
gh auth login

Claude Code (after nvm/node are installed)

npm install -g @anthropic-ai/claude-code

Node and Claude end up in the same bin dir (~/.nvm/versions/node/v24.13.1/bin/).

Shell Configuration (PATH)

The tool installers add their own entries to ~/.bashrc and ~/.bash_profile, but they can be messy (especially nvm's multi-line sourcing block). We replaced them with clean one-line-per-tool PATH exports.

Important: when managing dotfiles remotely from a Windows/MINGW shell, $HOME and $PATH are expanded by the local shell before the command ever reaches the server. To avoid this, write the file with a Python script that runs on the server:

# SCP a python script to the server, then run it
scp fix_profile.py root@${HETZNERIP}:/tmp/
ssh root@${HETZNERIP} 'python3 /tmp/fix_profile.py'

The script writes ~/.profile with this content:

# Dev tools
export NVM_DIR="$HOME/.nvm"
export NODE_HOME="$NVM_DIR/versions/node/v24.13.1"
export PATH="$NODE_HOME/bin:$HOME/.local/bin:$HOME/bin:$PATH"

# rvm — static exports instead of `source ~/.rvm/scripts/rvm`
# avoids "GEM_HOME not set" warning in non-interactive shells
# to find paths for a new version: source ~/.rvm/scripts/rvm && rvm info
# update RUBY_VERSION after: rvm install <new> && rvm use <new> --default
export RUBY_VERSION="ruby-3.3.7"
export GEM_HOME="$HOME/.rvm/gems/$RUBY_VERSION"
export GEM_PATH="$HOME/.rvm/gems/$RUBY_VERSION:$HOME/.rvm/gems/$RUBY_VERSION@global"
export PATH="$HOME/.rvm/gems/$RUBY_VERSION/bin:$HOME/.rvm/gems/$RUBY_VERSION@global/bin:$HOME/.rvm/rubies/$RUBY_VERSION/bin:$HOME/.rvm/bin:$PATH"

Same block also appended to ~/.bashrc (after the non-interactive guard) so it works in both login and interactive shells.
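fix_profile.py itself isn't included in the gist; a minimal sketch of the idea (file content is shortened, and the target path is an assumption based on the blocks above):

```python
#!/usr/bin/env python3
"""Sketch of fix_profile.py: write the profile verbatim from Python,
so $HOME and $PATH are never expanded by the local (MINGW) shell."""
import os

# Shortened here -- the real content is the full Dev tools + rvm block above.
PROFILE = '''\
# Dev tools
export NVM_DIR="$HOME/.nvm"
export NODE_HOME="$NVM_DIR/versions/node/v24.13.1"
export PATH="$NODE_HOME/bin:$HOME/.local/bin:$HOME/bin:$PATH"
'''

def write_profile(path: str, content: str = PROFILE) -> None:
    # Plain file write: no shell is involved, so nothing gets expanded.
    with open(path, "w") as f:
        f.write(content)

if __name__ == "__main__":
    # The gist runs this over SSH as root; the real script presumably
    # targets /home/alexey/.profile rather than root's own.
    write_profile(os.path.expanduser("~/.profile"))
```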

Verified with su - alexey -c 'which node && which claude && which ruby && which uv && which docker':

/home/alexey/.nvm/versions/node/v24.13.1/bin/node
/home/alexey/.nvm/versions/node/v24.13.1/bin/claude
/home/alexey/.rvm/rubies/ruby-3.3.7/bin/ruby
/home/alexey/.local/bin/uv
/usr/bin/docker

Shell config chain

  • ~/.bash_profile — sources ~/.profile
  • ~/.profile — Dev tools + rvm exports + export BASH_ENV="$HOME/.bashenv"
  • ~/.bashrc — interactive shell config, claude aliases, same Dev tools block at the end
  • ~/.bashenv — rvm-only static exports (for non-interactive shells via BASH_ENV)

Networking (Firewall)

Only port 22 (SSH) is open. All other ports are blocked. Use SSH tunnels to access services:

# as root
ufw allow 22/tcp
ufw --force enable

# access remote services via SSH tunnel from local machine
ssh -L 8080:localhost:8080 hetzner

Or use ssh fhetzner which auto-forwards ports 7681, 8080, 8888 (see 02-ssh-and-users.md).

Web Terminal (ttyd + tmux)

sudo apt install -y tmux ttyd

# enable mouse scrolling
echo "set -g mouse on" >> ~/.tmux.conf
echo "termcapinfo xterm* ti@:te@" >> ~/.screenrc

# tmux bash completion (autocomplete session names with Tab)
# custom completion script installed to /usr/share/bash-completion/completions/tmux
# supports: tmux attach -t <Tab>, tmux kill-session -t <Tab>, etc.

# start ttyd with tmux (no login prompt)
tmux new-session -d -s main
nohup ttyd -p 7681 tmux attach -t main > /dev/null 2>&1 &

Access via SSH tunnel: ssh fhetzner, then open http://localhost:7681. Every reconnection shows the same tmux session where you left off.

See tmux101.md for tmux reference.
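The nohup invocation above won't survive a reboot. One option (a sketch, not part of the original setup) is a systemd user unit like the OpenCode one, e.g. ~/.config/systemd/user/ttyd.service; paths assume the apt-installed ttyd and tmux:

```ini
[Unit]
Description=ttyd web terminal attached to tmux session "main"
After=network.target

[Service]
# the "-" prefix makes the pre-step non-fatal if the session already exists
ExecStartPre=-/usr/bin/tmux new-session -d -s main
ExecStart=/usr/bin/ttyd -p 7681 tmux attach -t main
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable with systemctl --user enable --now ttyd, and run loginctl enable-linger alexey once so user services start at boot without an active login.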

Claude Code Dotfiles

git clone https://github.com/alexeygrigorev/.claude.git ~/git/.claude
cd ~/git/.claude && bash install.sh

This creates symlinks for commands and skills in ~/.claude/ and adds aliases (c, cc, csp, ccsp, claude_init) to ~/.bashrc.

Copied ~/.claude/settings.json from the local machine:

{
  "attribution": { "commit": "" },
  "forceLoginMethod": "claudeai",
  "skipDangerousModePermissionPrompt": true
}

OpenCode Web

Running OpenCode as a web service on Hetzner, accessible via SSH tunnel.

Install

Node.js is already installed via nvm (see 03-dev-tools.md).

# Install opencode (no sudo — with nvm, global npm packages go into the user's home)
npm install -g opencode-ai

Config

Create ~/.config/opencode/opencode.json:

{
  "$schema": "https://opencode.ai/config.json",
  "model": "zai-anthropic/claude-opus-4-6",
  "provider": {
    "zai-anthropic": {
      "npm": "@ai-sdk/anthropic",
      "name": "Zai Anthropic",
      "options": {
        "baseURL": "https://api.z.ai/api/anthropic",
        "apiKey": "YOUR_API_KEY_HERE"
      },
      "models": {
        "claude-opus-4-6": {
          "name": "Claude Opus 4.6"
        }
      }
    }
  }
}

Commands and Skills (optional)

mkdir -p ~/.config/opencode/commands ~/.config/opencode/skills
# Copy your custom commands/skills to these directories

Systemd Service

Create ~/.config/systemd/user/opencode-web.service:

[Unit]
Description=OpenCode Web Service
After=network.target

[Service]
Type=simple
Environment=PATH=/home/alexey/.nvm/versions/node/v24.13.1/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ExecStart=/home/alexey/.nvm/versions/node/v24.13.1/bin/opencode web --hostname 0.0.0.0 --port 4096
Restart=on-failure
RestartSec=10

[Install]
WantedBy=default.target

Enable and start:

systemctl --user daemon-reload
systemctl --user enable opencode-web
systemctl --user start opencode-web

Check status:

systemctl --user status opencode-web

Access via SSH Tunnel

The service is not exposed publicly (firewall blocks port 4096). Access it via SSH tunnel:

ssh -L 4096:localhost:4096 -N hetzner

Then open http://localhost:4096 in your browser.

First-Time Setup

After opening the web UI:

  1. Change language: Click the gear icon (settings) and select English. This is stored in browser localStorage, so it only needs to be done once per browser.

Management

systemctl --user status opencode-web   # check status
systemctl --user restart opencode-web  # restart
systemctl --user stop opencode-web     # stop
systemctl --user start opencode-web    # start

Logs:

journalctl --user -u opencode-web -f

Telegram Bots

Migrated from AWS EC2 bot-farm to Hetzner. Both bots live in ~/bots/ and run in tmux sessions.

Bots

  • todo — zapier-telegram-bot (Python 3.12, uv)
  • tomato — au-tomator-telegram-bot (Python 3.12, uv)

Setup

Each bot has:

  • .env with bot token and chat ID
  • .gitignore with .env
  • pyproject.toml pinning Python 3.12, python-telegram-bot==13.11, urllib3<2, setuptools<81
  • send.sh updated for uv:
#!/usr/bin/env bash
cd "$(dirname "$0")"
COMMAND="$1"
uv run --env-file .env python send.py "$COMMAND" > send.log 2>&1
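send.py itself lives in the bot repos; a hypothetical sketch of the shape it takes under python-telegram-bot 13.11 (command names, env variable names, and message texts here are made up):

```python
# Hypothetical sketch of a send.py; assumes python-telegram-bot==13.11
# (synchronous v13 API) and BOT_TOKEN / CHAT_ID loaded from .env by uv.
import os
import sys

def build_message(command: str) -> str:
    """Map a send.sh command name to the text the bot posts (names made up)."""
    templates = {
        "newsletter": "Reminder: send the newsletter",
        "trello": "Reminder: review the Trello board",
    }
    return templates.get(command, f"Unknown command: {command}")

def send(command: str) -> None:
    import telegram  # python-telegram-bot==13.11
    bot = telegram.Bot(token=os.environ["BOT_TOKEN"])
    bot.send_message(chat_id=os.environ["CHAT_ID"], text=build_message(command))

if __name__ == "__main__" and len(sys.argv) > 1:
    send(sys.argv[1])
```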

Dependency fixes

  • imghdr removed in Python 3.13 — pinned to Python 3.12 via uv python pin 3.12
  • urllib3 v2 removed contrib.appengine — pinned urllib3<2
  • setuptools 82+ removed pkg_resources — pinned setuptools<81
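Put together, the pins above could look like this in pyproject.toml (a sketch — project name and exact layout are assumptions):

```toml
[project]
name = "zapier-telegram-bot"
version = "0.1.0"
requires-python = ">=3.12,<3.13"
dependencies = [
    "python-telegram-bot==13.11",
    "urllib3<2",
    "setuptools<81",
]
```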

Crontab

Zapier bot scheduled messages (newsletters, slack invites, trello, slack dump). Migrated from bot-farm crontab. Check with crontab -l.
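The entries themselves aren't reproduced in the gist; a hypothetical example of what one looks like (the schedule and command name are placeholders):

```
# m h dom mon dow  command
0 9 * * 1  /home/alexey/bots/todo/send.sh newsletter
```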

Current State

  • OS: Ubuntu 24.04 LTS (kernel 6.8.0-90-generic)
  • User: alexey (uid 1000, sudo + docker groups, passwordless sudo)
  • SSH: ssh hetzner (plain), ssh fhetzner (with port forwarding)
  • Firewall: UFW enabled, only port 22 open
  • Disk layout:
    • nvme0n1 — 512 GB, Ubuntu (32G swap, 1G /boot, 444G /)
    • nvme1n1 — 512 GB, ext4, mounted at /data (445 GB usable, owned by alexey)
  • Dev tools: nvm, node v24.13.1, rvm, ruby 3.3.7, uv, docker, gh 2.87.0, claude 2.1.47, tmux, ttyd
  • Bots (~/bots/, running in tmux sessions):
    • todo — zapier-telegram-bot (Python 3.12, uv)
    • tomato — au-tomator-telegram-bot (Python 3.12, uv)
  • Crontab: zapier bot scheduled messages (newsletters, slack invites, trello, slack dump)
  • Git repos (~/git/):
    • ai-engineering-buildcamp
    • ai-engineering-buildcamp-code
    • ai-shipping-labs
    • telegram-writing-assistant
    • .claude (dotfiles)


tmux 101

Sessions

tmux                        # start new session
tmux new -s myname          # start named session
tmux ls                     # list sessions
tmux attach -t myname       # attach to session
tmux kill-session -t myname # kill session

Key bindings

All commands start with Ctrl+b (prefix), then a key:

Sessions

  • d — detach (session keeps running)
  • s — list/switch sessions
  • $ — rename session

Windows (tabs)

  • c — new window
  • n — next window
  • p — previous window
  • 0-9 — switch to window by number
  • , — rename window
  • & — close window

Panes (splits)

  • " — split into top/bottom panes
  • % — split into left/right panes
  • arrow keys — move between panes
  • z — toggle pane fullscreen (zoom)
  • x — close pane
  • { / } — swap pane with the previous/next pane
  • Ctrl+arrow — resize pane

Copy/scroll mode

  • [ — enter scroll mode (use arrow keys, Page Up/Down)
  • q — exit scroll mode

With set -g mouse on in ~/.tmux.conf, you can scroll with the mouse wheel.

Common workflows

# start a long-running process, detach, come back later
tmux new -s build
make all
# Ctrl+b, d (detach)
# ... later ...
tmux attach -t build

# multiple panes: editor + terminal
tmux
# Ctrl+b, " (split)
# top pane: vim file.py
# bottom pane: python file.py