Modern Kubernetes clusters offer powerful primitives like FQDN‑based network policies (e.g., via Cilium or Calico Enterprise). These let you express rules such as "this workload may only talk to github.com and example.com" without worrying about IP churn, TLS hostname validation, or container‑level DNS quirks.
Docker, however, does not provide anything comparable out of the box.
This article documents a practical approach to implementing domain‑based egress control in plain Docker Compose, without modifying application containers, without terminating TLS, and without introducing heavyweight service meshes. It also covers the pitfalls we encountered, especially around QUIC/HTTP‑3, and compares our approach with the pattern suggested in the article Creating a Simple but Effective Outbound "Firewall" using Vanilla Docker‑Compose by Forest Johnson.
Our goal was to create a Docker‑native mechanism that:
- Restricts outbound connections per domain, not per IP.
- Allows different containers to access different external domains.
- Preserves TLS end‑to‑end (no MITM, no certificate rewriting).
- Requires no modification of application containers.
- Uses only Docker Compose, no Kubernetes, no custom CNI.
- Works with multiple isolated internal networks.
- Uses a single egress container for all outbound traffic.
We set the following constraints:
- Each internal network should map to a specific external domain.
- Applications should connect using the real hostname (e.g., github.com).
- The proxy must forward TCP traffic transparently.
- Application containers must remain unprivileged.
- The proxy container should run as a non‑root user.
- No TLS termination or certificate injection.
- No modification of /etc/hosts, /etc/nsswitch.conf, or entrypoints inside app containers.
- No static IP hacks inside app containers.
- No cron jobs.
- No custom DNS servers inside apps.
- No patching of upstream images.
We built a unified TCP proxy container using GOST, combined with Docker internal networks and DNS aliases.
Each domain gets its own internal network:
github_net → github.com
example_net → example.com
quic_net → quic.nginx.org
These networks are marked internal: true, meaning they cannot reach the internet directly.
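For illustration, here is a minimal sketch of the top‑level network definitions, assuming /24 subnets chosen to match the static proxy IPs below. The extra, non‑internal egress_net is our own name (an assumption, not from the original setup) for the network that gives the proxy itself a route to the real internet:

networks:
  egress_net: {}              # NOT internal: the proxy's path to the real internet (our assumption)
  github_net:
    internal: true
    ipam:
      config:
        - subnet: 172.16.20.0/24
  example_net:
    internal: true
    ipam:
      config:
        - subnet: 172.16.30.0/24
  quic_net:
    internal: true
    ipam:
      config:
        - subnet: 172.16.40.0/24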
The proxy container receives a static IP on each internal network:
172.16.20.2 → github_net
172.16.30.2 → example_net
172.16.40.2 → quic_net
Inside each network, the proxy is aliased to the real domain:
github.com → 172.16.20.2
example.com → 172.16.30.2
quic.nginx.org → 172.16.40.2
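In compose terms, the static IPs and the aliases both live on the proxy's service‑level network attachments. A sketch, assuming the egress_net defined above and a service we'll call egress-proxy:

  egress-proxy:
    networks:
      egress_net: {}            # non-internal: the proxy's actual path out
      github_net:
        ipv4_address: 172.16.20.2
        aliases:
          - github.com
      example_net:
        ipv4_address: 172.16.30.2
        aliases:
          - example.com
      quic_net:
        ipv4_address: 172.16.40.2
        aliases:
          - quic.nginx.org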
Applications simply connect to:
curl https://github.com
…and Docker resolves that to the proxy container.
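On the application side, nothing changes except network membership. A hypothetical app1 that may only reach github.com looks like this:

  app1:
    image: curlimages/curl      # example image; any unmodified image works
    command: ["curl", "-sS", "https://github.com"]
    networks:
      - github_net              # the app's only network, so github.com is its only egress

Because the proxy forwards the raw byte stream, curl still validates the real github.com certificate; no CA injection is involved.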
GOST listens on each internal IP and forwards traffic to the real domain:
-L=tcp://172.16.20.2:443/github.com:443
-L=tcp://172.16.30.2:443/example.com:443
-L=tcp://172.16.40.2:443/quic.nginx.org:443
To prevent the proxy from resolving its own aliases (which would cause loops), we mount the host’s /etc/resolv.conf into the proxy container:
/etc/resolv.conf:/etc/resolv.conf:ro
This forces the proxy to use real upstream DNS.
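Filling in the rest of the egress-proxy service (the image tag and non‑root UID are our assumptions; combine with the network attachments shown earlier):

  egress-proxy:
    image: gogost/gost          # assumed image name for GOST v3; pin a real tag in production
    user: "65534:65534"         # non-root; recent Docker allows unprivileged binds to port 443 in containers
    command:
      - -L=tcp://172.16.20.2:443/github.com:443
      - -L=tcp://172.16.30.2:443/example.com:443
      - -L=tcp://172.16.40.2:443/quic.nginx.org:443
    volumes:
      - /etc/resolv.conf:/etc/resolv.conf:ro   # real upstream DNS instead of Docker's embedded resolver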
The overall architecture:

                        +----------------------+
                        |       External       |
                        |       Internet       |
                        |----------------------|
                        |      github.com      |
                        |      example.com     |
                        |    quic.nginx.org    |
                        +----------+-----------+
                                   ^
                                   | (TCP forwarding)
                                   |
                        +----------+-------------+
                        |   Unified GOST Proxy   |
                        |------------------------|
                        |  172.16.20.2 (github)  |
                        |  172.16.30.2 (example) |
                        |  172.16.40.2 (quic)    |
                        +-----------+------------+
                                    ^
          +-------------------------+------------------------+
          |                         |                        |
  +-------+---------+     +---------+---------+     +--------+---------+
  |   github_net    |     |    example_net    |     |     quic_net     |
  |   (internal)    |     |    (internal)     |     |    (internal)    |
  |-----------------|     |-------------------|     |------------------|
  | DNS alias:      |     | DNS alias:        |     | DNS alias:       |
  | github.com ---> |     | example.com ----> |     | quic.nginx.org ->|
  | 172.16.20.2     |     | 172.16.30.2       |     | 172.16.40.2      |
  +-------+---------+     +---------+---------+     +--------+---------+
          |                         |                        |
    +-----+-----+             +-----+-----+            +-----+-----+
    |   App 1   |             |   App 2   |            |   App 3   |
    |-----------|             |-----------|            |-----------|
    | curl https|             | curl https|            | curl https|
    |  github   |             |  example  |            |   quic    |
    +-----------+             +-----------+            +-----------+
Even if you configure custom DNS servers, Docker's embedded DNS will still resolve github.com → 172.16.20.2 inside the proxy container, because network aliases are answered by the embedded resolver before any upstream is consulted. The result is an infinite loop (the proxy forwards to itself) unless you override /etc/resolv.conf.
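The compose‑level dns option doesn't help either: the embedded resolver answers for aliases itself and only forwards unknown names upstream, so the following still loops (illustrative only):

  egress-proxy:
    dns:
      - 1.1.1.1   # github.com is still answered with 172.16.20.2 by the embedded resolver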
We initially attempted to support QUIC/HTTP‑3 by enabling UDP forwarding:
-L=udp://172.16.40.2:443/quic.nginx.org:443
This failed with errors like:
curl: (56) QUIC connection has been shut down
Reason: QUIC is stateful over UDP and requires NAT‑style forwarding.
GOST is a TCP/UDP proxy, not a NAT router, so QUIC flows break.
We ultimately dropped the HTTP/3 requirement for this solution.
Unlike the SequentialRead approach, we do not need:
- /etc/hosts hacks
- /etc/nsswitch.conf hacks
- static IPs inside app containers
- running apps as root
- patching entrypoints
Docker DNS aliasing handles everything cleanly.
Currently, this approach works only for containers that are attached exclusively to internal networks. It therefore can't be used for containers that publish ports on the host, e.g. it can't restrict outbound traffic from a Traefik reverse proxy with port 443 mapped to the host.
Although this solution works well for TCP‑only traffic, there is a clear path forward for supporting QUIC/HTTP‑3 and more advanced egress policies.
A promising next step is a privileged container (sketched below) running:
- iptables or nftables
- NAT (MASQUERADE)
- dnsmasq or CoreDNS for dynamic FQDN → IP sets
This would allow:
- QUIC/HTTP‑3 end‑to‑end
- dynamic FQDN‑based ACLs
- transparent L3/L4 routing
- no TLS termination
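A hypothetical sketch of such a gateway, assuming an Alpine base with iptables installed at startup (the service name, image, and rules are ours, not a tested implementation):

  egress-gateway:
    image: alpine:3.20
    cap_add:
      - NET_ADMIN               # needed to program NAT rules
    sysctls:
      - net.ipv4.ip_forward=1   # forward packets between internal networks and the internet
    command: >
      sh -c "apk add --no-cache iptables &&
             iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE &&
             sleep infinity"

App containers would route via this gateway while dnsmasq or CoreDNS keeps the per‑domain IP sets current; because UDP is NATed rather than proxied, QUIC/HTTP‑3 would survive end to end.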
References:
- Docker Networking Documentation: https://docs.docker.com/network/
- GOST (GO Simple Tunnel): https://github.com/go-gost/gost
- SequentialRead Article: https://sequentialread.com/creating-a-simple-but-effective-firewall-using-vanilla-docker-compose/
- QUIC / HTTP‑3 (RFC 9000): https://www.rfc-editor.org/rfc/rfc9000
- Kubernetes FQDN Network Policies (Cilium): https://docs.cilium.io/en/stable/security/dns/
Docker doesn’t make domain‑based egress control easy, but with internal networks, DNS aliases, and a unified TCP proxy, it’s possible to build a clean, maintainable solution that works across many applications without modifying them.
Dropping QUIC/HTTP‑3 support was a pragmatic compromise, but the architecture remains solid—and future work on NAT‑based routing could bring full protocol transparency.
If you’re running Docker Compose in production and want Kubernetes‑style FQDN policies, this pattern is a strong foundation.