
@Zahorone
Forked from Lvdwardt/rustdesk.yml
Last active December 1, 2025 21:23
Rustdesk + Nginx Proxy Manager + persistent nftables rules to stop Docker from bypassing UFW
version: '3'

networks:
  rustdesk-net:
    external: false

services:
  nginx-proxy-manager:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      # These ports are in format <host-port>:<container-port>
      - '80:80' # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '127.0.0.1:8081:81' # Admin Web Port. NPM bypasses UFW; binding to 127.0.0.1 is the easy fix, restricting access to localhost only.
      # Add any other stream port you want to expose
      # - '21:21' # FTP
      # Ports needed for Rustdesk:
      - '21115:21115'
      - '21116:21116'
      - '21116:21116/udp'
      - '21117:21117'
      - '21118:21118'
      - '21119:21119'
    # Uncomment the next line if you uncomment anything in this section
    # environment:
      # Uncomment this if you want to change the location of
      # the SQLite DB file within the container
      # DB_SQLITE_FILE: "/data/database.sqlite"
      # Uncomment this if IPv6 is not enabled on your host
      # DISABLE_IPV6: 'true'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - rustdesk-net
  hbbs:
    container_name: hbbs
    image: rustdesk/rustdesk-server:latest
    command: hbbs -r rustdesk.yourDomain.com:21117
    volumes:
      - ./data:/root
    networks:
      - rustdesk-net
    depends_on:
      - hbbr
    restart: unless-stopped
  hbbr:
    container_name: hbbr
    image: rustdesk/rustdesk-server:latest
    command: hbbr
    volumes:
      - ./data:/root
    networks:
      - rustdesk-net
    restart: unless-stopped
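
A minimal bring-up, assuming the file above is saved as docker-compose.yml in the current directory (replace rustdesk.yourDomain.com with your own domain first):

```shell
# Start all three containers in the background
docker compose up -d
# Confirm nginx-proxy-manager, hbbs and hbbr are running
docker compose ps
```

The NPM admin UI is then reachable only from the host itself at http://127.0.0.1:8081.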
Zahorone commented Oct 17, 2025

If you would rather enforce this with nftables (nft) rules than with the `- '127.0.0.1:8081:81'` mapping in docker-compose.yml, here is how to do it.

Problem: DOCKER-USER Chain Rules Disappear After Reboot
After a server restart, the DOCKER-USER chain is empty and custom firewall rules (e.g., blocking port 8081) are gone.

Root Cause
Docker completely resets its firewall rules on every startup. The daemon clears all custom rules and recreates the chains it needs (DOCKER, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER). Since /etc/nftables.conf is loaded before the Docker service starts, Docker subsequently overwrites any custom modifications made to DOCKER-USER.
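
This ordering can be confirmed on your own host (a diagnostic sketch; the nftables unit name may differ by distribution):

```shell
# Show this boot's logs for both units; nftables should appear before docker
journalctl -b -u nftables.service -u docker.service --no-pager | head -n 20
# The chain exists again right after the daemon starts, but without custom rules
sudo nft list chain ip filter DOCKER-USER
```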

Solution: Systemd Service with Automatic Rule Injection
Step 1: Create Firewall Rules Script

sudo nano /usr/local/bin/docker-firewall-rules.sh

Script content:

#!/bin/bash
# Add custom rules to DOCKER-USER after Docker starts.
# Note: re-running this script appends duplicate rules; see the idempotent version below.

# Block port 8081 for IPv4
/usr/sbin/nft add rule ip filter DOCKER-USER tcp dport 8081 drop
# Block port 8081 for IPv6
/usr/sbin/nft add rule ip6 filter DOCKER-USER tcp dport 8081 drop

The following version includes status messages, so it is easy to see which rules were removed, left unchanged, newly added, or already present in your firewall. It is fully idempotent and logs every action:

#!/bin/bash
# Run as root; the systemd unit below already executes it as root.
PORTS=(82 8081 8888) # Add more ports to this list as needed

# First, remove drop rules for ports no longer present in PORTS (IPv4/IPv6)

# IPv4
for RULE_PORT in $(nft list chain ip filter DOCKER-USER | grep -oP 'tcp dport \K[0-9]+'); do
  if [[ ! " ${PORTS[*]} " =~ " ${RULE_PORT} " ]]; then
    # nft deletes rules by handle, so list with -a to look the handle up
    HANDLE=$(nft -a list chain ip filter DOCKER-USER | grep "tcp dport $RULE_PORT drop" | grep -oP 'handle \K[0-9]+' | head -n1)
    nft delete rule ip filter DOCKER-USER handle "$HANDLE"
    echo "Removed rule for IPv4 port $RULE_PORT"
  else
    echo "Rule for IPv4 port $RULE_PORT left unchanged (present in PORTS)"
  fi
done

# IPv6
for RULE_PORT in $(nft list chain ip6 filter DOCKER-USER | grep -oP 'tcp dport \K[0-9]+'); do
  if [[ ! " ${PORTS[*]} " =~ " ${RULE_PORT} " ]]; then
    HANDLE=$(nft -a list chain ip6 filter DOCKER-USER | grep "tcp dport $RULE_PORT drop" | grep -oP 'handle \K[0-9]+' | head -n1)
    nft delete rule ip6 filter DOCKER-USER handle "$HANDLE"
    echo "Removed rule for IPv6 port $RULE_PORT"
  else
    echo "Rule for IPv6 port $RULE_PORT left unchanged (present in PORTS)"
  fi
done

# Then, add any missing ports idempotently
for PORT in "${PORTS[@]}"; do
  # IPv4
  if ! nft list chain ip filter DOCKER-USER | grep -q "tcp dport $PORT drop"; then
    nft add rule ip filter DOCKER-USER tcp dport "$PORT" drop
    echo "Added: drop for IPv4 port $PORT"
  else
    echo "Rule already exists for IPv4 port $PORT"
  fi
  # IPv6
  if ! nft list chain ip6 filter DOCKER-USER | grep -q "tcp dport $PORT drop"; then
    nft add rule ip6 filter DOCKER-USER tcp dport "$PORT" drop
    echo "Added: drop for IPv6 port $PORT"
  else
    echo "Rule already exists for IPv6 port $PORT"
  fi
done
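
The array-membership test the script relies on can be exercised in isolation, without touching the firewall (a pure-bash sketch; `in_ports` and the sample ports are illustrative, not part of the script above):

```shell
#!/bin/bash
PORTS=(82 8081 8888)

# True if $1 is one of the ports in PORTS (the same idiom the script uses)
in_ports() {
  local p="$1"
  [[ " ${PORTS[*]} " == *" $p "* ]]
}

in_ports 8081 && echo "8081 is managed, rule is kept"
in_ports 9999 || echo "9999 is stale, rule would be removed"
```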

Make it executable:

sudo chmod +x /usr/local/bin/docker-firewall-rules.sh

Step 2: Create Systemd Service Unit

sudo nano /etc/systemd/system/docker-firewall.service

Service file content:

[Unit]
Description=Docker Firewall Rules
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/docker-firewall-rules.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

This service runs automatically after the Docker daemon starts, ensuring rules are added only after Docker creates its chains.
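
An alternative to a standalone unit is a drop-in override on docker.service itself, so the script runs as part of Docker's own startup (a sketch; the drop-in file name is arbitrary):

```
# /etc/systemd/system/docker.service.d/firewall.conf
[Service]
ExecStartPost=/usr/local/bin/docker-firewall-rules.sh
```

Apply it with `sudo systemctl daemon-reload && sudo systemctl restart docker`. The standalone unit used above has the advantage that the firewall logic can be enabled, disabled, and inspected independently of Docker.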

Step 3: Enable the Service

sudo systemctl daemon-reload
sudo systemctl enable docker-firewall.service
sudo systemctl start docker-firewall.service

Step 4: Verify Functionality

Check if the rule was added:

sudo nft list ruleset | grep -A 5 'chain DOCKER-USER'

Expected output:

	chain DOCKER-USER {
		tcp dport 8081 drop
	}
}

table ip6 filter {
	chain ufw6-before-logging-input {
--
	chain DOCKER-USER {
		tcp dport 8081 drop
	}
}

Testing Persistence After Reboot

Reboot the server:

sudo reboot

After reboot, verify the rule persists:

sudo nft list ruleset | grep -A 5 'chain DOCKER-USER'

Troubleshooting
If rules still disappear, check service status:

sudo systemctl status docker-firewall.service
sudo journalctl -u docker-firewall.service -n 20

The service should show Active: active (exited) with status=0/SUCCESS.
