@taslabs-net
Last active March 5, 2026 16:19
Cloudflare products and features used in cf-gitlab

Why Cloudflare for Self-Hosted GitLab

GitLab: FlarelyLegal/cf-gitlab

GitHub: FlarelyLegal/cf-gitlab

TL;DR: One Debian 13 LXC, 50 GB disk, 13 Cloudflare products, zero inbound ports. Storage never fills up (R2). Certs never expire (DNS-01). Login is gated by identity, not passwords (Access). Public downloads are cached at the edge (Workers). Backups go offsite automatically (R2). SSH works from anywhere (Tunnel). One .env file configures everything.


The Problem

You want to self-host GitLab on a small LXC in your homelab, but doing that properly means solving a dozen problems at once: exposing it to the internet without opening ports, keeping TLS certificates fresh, offloading storage so your disk doesn't fill up, protecting login pages from bots, caching large file downloads, and making SSH work from anywhere. Each of those problems usually means bolting on another tool, another service, another thing to maintain.

The idea behind cf-gitlab is that Cloudflare already solves all of these problems. You just need to wire them together.


"But I only access it from my LAN"

That's how most homelab GitLab instances start. And it's a fair question: why route through Cloudflare if you're sitting on the same network?

The answer is that most of what Cloudflare does here has nothing to do with external access. The benefits are the same whether your users are on the couch or on another continent:

  • R2 still keeps your disk from filling up. GitLab generates artifacts, LFS objects, uploads, packages, and diffs regardless of where your users are. Without object storage, all of that piles up on local disk. R2 offloads it. This is the single biggest operational win, and it has zero relationship to where traffic comes from.

  • Backups still go offsite. Daily database and repo backups upload to R2 automatically. If your Proxmox host dies, your data doesn't. That matters more when you're running on hardware in a closet than when you're on a cloud provider with redundant storage.

  • TLS still renews itself. DNS-01 challenges work regardless of network topology. No HTTP validation, no port 80 exposure, no "oops the cert expired and now git push is broken." This is pure automation, and it works the same from LAN or WAN.

  • Access still gates the login page. Even on a trusted LAN, an OIDC gate in front of GitLab means no password-spray attacks, no credential stuffing, no "someone found the IP on Shodan." Your LXC's only open ports face the LAN, and even those sit behind UFW rules, but Access adds identity-aware enforcement that a firewall can't provide. And when you inevitably do want remote access (from a coffee shop, from a phone, from a friend's house), it already works. No VPN to set up, no config to change.

  • The CDN Worker still caches. If your CI pipeline downloads the same artifact 50 times, that's 50 round trips to your LXC and 50 fetches from R2. With the CDN, it's one fetch and 49 cache hits from the nearest Cloudflare edge. Your LXC never notices. Even on a LAN, reducing load on a small container matters.

  • The tunnel still simplifies networking. No port forwarding, no split DNS, no NAT hairpin rules, no static IP requirements. The LXC makes an outbound connection and everything works. Move it to a different subnet, change your router, migrate to a new Proxmox host, and the tunnel reconnects. Nothing else changes. And the day you want external access, it's already there. No new config, no new security review, no opening ports on your router.

  • SSH still works through the tunnel. git clone, git push, ssh gitlab-lxc, all routed through cloudflared. You get the same consistent access path whether you're at home or traveling. No "use this SSH config at home, that one on the road."

The honest truth is that the tunnel is the least valuable piece for a LAN-only setup. Direct SSH and HTTPS to the LXC IP work fine. But everything else (R2, backups, TLS automation, Access, CDN caching, NTS) delivers the same value regardless of where traffic originates. And the tunnel costs nothing while giving you a clean, consistent access path that scales from "just me on my couch" to "my team across three time zones" without any reconfiguration.

The real question isn't "do I need external access." It's "do I want to solve storage, backups, TLS, identity, and caching all at once, with one stack, for free?" If yes, the tunnel is just a bonus.


What I Use and Why

Getting traffic in without opening ports

Cloudflare Tunnel is the front door. A single cloudflared process on the LXC creates an outbound connection to Cloudflare's edge. No port forwarding, no dynamic DNS, no public IP needed. GitLab, the container registry, GitLab Pages, and SSH all share the same tunnel on different hostnames. The LXC stays invisible to port scanners.
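
As a sketch, the tunnel's ingress rules for this layout look something like the following. Hostnames, ports, and the tunnel ID are placeholders; the actual file is driven from .env:

```yaml
# /etc/cloudflared/config.yml: sketch of the ingress layout (hostnames are examples)
tunnel: <TUNNEL_ID>
credentials-file: /etc/cloudflared/<TUNNEL_ID>.json

ingress:
  - hostname: gitlab.example.com
    service: https://localhost:443
  - hostname: registry.gitlab.example.com
    service: https://localhost:5050
  - hostname: "*.pages.gitlab.example.com"
    service: https://localhost:443
  - hostname: ssh.gitlab.example.com
    service: ssh://localhost:22
  # Anything not matched above is refused
  - service: http_status:404
```

Ingress rules are evaluated top to bottom, so the http_status:404 catch-all at the end is what keeps unknown hostnames from reaching the origin at all.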

DNS records (CNAMEs pointed at the tunnel) make this all addressable. Everything is proxied through Cloudflare, so there's no direct path to the origin.

Keeping storage under control

This is the biggest practical win. GitLab generates a lot of data: CI artifacts, LFS objects, package registry blobs, user uploads, Pages deployments, MR diffs. On a stock install, all of that lands on local disk and grows until you run out of space.

R2 Object Storage moves all of it off the LXC. I use 10 separate buckets, one per object type, based on GitLab's object storage recommendations. The LXC disk only needs to hold Git repositories, PostgreSQL, and the GitLab binaries (~2.5 GB). A 50 GB disk is plenty.

R2 has no egress fees, which matters here because GitLab's proxy_download setting means every file download flows through the GitLab process. With other S3 providers, that egress adds up. With R2, it's free. Daily backups (database + repos) also upload to a dedicated R2 bucket, so disaster recovery doesn't depend on the LXC surviving.
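
In gitlab.rb terms this is GitLab's consolidated object storage pointed at R2's S3-compatible endpoint. A sketch, with example bucket names and the account ID and keys as placeholders:

```ruby
# /etc/gitlab/gitlab.rb: consolidated object storage against R2 (sketch)
gitlab_rails['object_store']['enabled'] = true
gitlab_rails['object_store']['proxy_download'] = true
gitlab_rails['object_store']['connection'] = {
  'provider' => 'AWS',   # R2 speaks the S3 API
  'region' => 'auto',
  'endpoint' => 'https://<ACCOUNT_ID>.r2.cloudflarestorage.com',
  'aws_access_key_id' => '<R2_ACCESS_KEY_ID>',
  'aws_secret_access_key' => '<R2_SECRET_ACCESS_KEY>'
}
gitlab_rails['object_store']['objects']['artifacts']['bucket'] = 'gitlab-artifacts'
gitlab_rails['object_store']['objects']['lfs']['bucket'] = 'gitlab-lfs'
gitlab_rails['object_store']['objects']['uploads']['bucket'] = 'gitlab-uploads'
gitlab_rails['object_store']['objects']['packages']['bucket'] = 'gitlab-packages'
gitlab_rails['object_store']['objects']['external_diffs']['bucket'] = 'gitlab-mr-diffs'
# ...one bucket per remaining object type (pages, terraform_state, and so on)
```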

Caching what's already public

When someone downloads a raw file or a release archive from GitLab, the request hits GitLab, which fetches from R2 and streams it back. That works, but it's slow, and it puts load on a small LXC for content that doesn't change.

The CDN Worker sits in front of GitLab on a separate hostname (cdn.gitlab.example.com). For public content (raw files, archives, anything without an auth token) it caches the response at the edge for 24 hours. Repeat downloads hit Cloudflare's cache, not GitLab, not R2. Authenticated requests pass through uncached.

The Worker doesn't read from R2 directly. It reaches GitLab through a VPC Service Binding, a private connection through Cloudflare's internal network into the same tunnel. The origin is never exposed publicly, even to the Worker.
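
The cache decision itself is simple. As an assumed sketch (not the repo's exact code): only unauthenticated GET/HEAD requests for raw files or archives are eligible, and any sign of a credential forces a pass-through:

```javascript
// Sketch of the CDN Worker's cache decision (assumed logic, not the repo's exact code).
// Public raw-file and archive paths are cacheable; anything that looks authenticated
// passes through to GitLab uncached.
const CACHEABLE_PATH = /\/(raw|-\/archive)\//;

function isCacheable(method, pathname, headers) {
  if (method !== "GET" && method !== "HEAD") return false;
  if (!CACHEABLE_PATH.test(pathname)) return false;
  // Any credential on the request means a potentially private response: never cache.
  if (headers["authorization"] || headers["private-token"]) return false;
  if ((headers["cookie"] || "").includes("_gitlab_session")) return false;
  return true;
}
```

In the full Worker, a cacheable response is stored at the edge with a 24-hour TTL, and everything else is forwarded over the VPC Service Binding into the tunnel.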

Getting notified without running a mail server

GitLab has built-in email notifications, but they need an SMTP server. The CDN Worker sidesteps that: a POST /webhook/gitlab endpoint receives GitLab webhook events and sends formatted emails through Email Routing's send_email binding. Push, merge request, pipeline, deployment, and five more: nine event types in all, each delivered as a clean plaintext email to one or more verified addresses. No Postfix, no SendGrid, no credentials to rotate. The feature is opt-in; without ENABLE_WEBHOOK_EMAIL=true, the endpoint doesn't exist.
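
As an illustrative sketch (not the repo's code), the event-to-subject mapping inside such a Worker could look like this. The X-Gitlab-Event header values and payload fields follow GitLab's webhook documentation; the formatting choices are hypothetical:

```javascript
// Hypothetical sketch: turn a GitLab webhook into an email subject line.
// eventHeader is the request's X-Gitlab-Event value; payload is the parsed JSON body.
function emailSubject(eventHeader, payload) {
  const project = payload.project?.path_with_namespace ?? "unknown/project";
  switch (eventHeader) {
    case "Push Hook":
      return `[${project}] push to ${(payload.ref || "").replace("refs/heads/", "")}`;
    case "Merge Request Hook":
      return `[${project}] MR !${payload.object_attributes?.iid}: ${payload.object_attributes?.title}`;
    case "Pipeline Hook":
      return `[${project}] pipeline ${payload.object_attributes?.status}`;
    default:
      // Remaining event types fall back to the raw event name.
      return `[${project}] ${eventHeader}`;
  }
}
```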

Protecting the front door

Cloudflare Access (Zero Trust) handles authentication. GitLab is configured with OmniAuth OIDC pointing at a Cloudflare Access application. Users authenticate through Access before they ever see GitLab's login page. Once SSO is verified working, I lock it down further: disable signups, disable password login, enable auto-redirect so visitors go straight through the Access flow.
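
The GitLab side of that is a standard OmniAuth OIDC provider entry. A sketch, with the Access team domain, client credentials, and hostname as placeholders:

```ruby
# /etc/gitlab/gitlab.rb: OIDC against a Cloudflare Access application (sketch)
gitlab_rails['omniauth_enabled'] = true
gitlab_rails['omniauth_allow_single_sign_on'] = ['openid_connect']
gitlab_rails['omniauth_block_auto_created_users'] = false
# Once SSO is verified working:
# gitlab_rails['omniauth_auto_sign_in_with_provider'] = 'openid_connect'
gitlab_rails['omniauth_providers'] = [
  {
    name: 'openid_connect',
    label: 'Cloudflare Access',
    args: {
      name: 'openid_connect',
      scope: ['openid', 'email', 'profile'],
      response_type: 'code',
      issuer: 'https://<TEAM>.cloudflareaccess.com',
      discovery: true,
      client_options: {
        identifier: '<CF_ACCESS_CLIENT_ID>',
        secret: '<CF_ACCESS_CLIENT_SECRET>',
        redirect_uri: 'https://gitlab.example.com/users/auth/openid_connect/callback'
      }
    }
  }
]
```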

GitHub OAuth is a secondary provider, useful for importing repos.

WAF Custom Rules protect the CDN hostname: only allow GET/HEAD/OPTIONS on known-safe paths (/raw/, /-/archive/), block everything else. Bot blocking catches AI crawlers and scrapers.

Rate Limiting is belt-and-suspenders on the health endpoints. The tunnel already prevents direct origin access, but a rate limit (20 req/60s per IP) stops anyone from hammering /-/health through the tunnel.
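
The allow-list style of that WAF rule can be expressed in Cloudflare's Rules language roughly as follows (hostname is an example). With the action set to Block, anything outside the safe set is dropped:

```
http.host eq "cdn.gitlab.example.com" and not (
  http.request.method in {"GET" "HEAD" "OPTIONS"} and
  (http.request.uri.path contains "/raw/" or
   http.request.uri.path contains "/-/archive/")
)
```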

TLS without thinking about it

DNS-01 challenges via the Cloudflare API let Certbot issue and renew Let's Encrypt certificates automatically, including wildcard certs for GitLab Pages. A zone-scoped API token on the LXC handles the challenge; no HTTP validation, no port 80 exposure needed. Certificates renew in the background with a cron job and a hook that reloads nginx.
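
The moving parts are small: a root-only credentials file holding the zone-scoped token, one issuance command, and a renewal hook. Roughly, with domains and paths as examples:

```sh
# /root/.secrets/cloudflare.ini holds one line (zone-scoped token, Zone:DNS:Edit):
#   dns_cloudflare_api_token = <token>

# One-time issuance, including the wildcard for Pages:
certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d gitlab.example.com -d '*.pages.gitlab.example.com'

# Renewal runs from cron; the deploy hook reloads GitLab's bundled nginx:
certbot renew --quiet --deploy-hook 'gitlab-ctl hup nginx'
```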

SSH from anywhere

Git-over-SSH and admin SSH both work through the tunnel. ssh-config.sh sets up the local ~/.ssh/config with cloudflared access ssh as a ProxyCommand. From the user's perspective, git clone git@gitlab.example.com:group/project.git and ssh gitlab-lxc just work, whether you're on the LAN or across the world.
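
In spirit, what ssh-config.sh writes is a pair of ProxyCommand entries like these (hostnames are examples; ssh.gitlab.example.com stands in for whatever hostname the tunnel routes to port 22):

```
# ~/.ssh/config: sketch of what ssh-config.sh generates
Host gitlab.example.com
    User git
    ProxyCommand cloudflared access ssh --hostname ssh.gitlab.example.com

Host gitlab-lxc
    HostName ssh.gitlab.example.com
    User root
    ProxyCommand cloudflared access ssh --hostname %h
```

cloudflared access ssh wraps the SSH stream in the tunnel connection, so the ssh client itself never needs a routable path to the LXC.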

Keeping time honest

Cloudflare NTS (Network Time Security) is the time source via chrony. It's authenticated NTP, so the LXC knows the time responses actually came from Cloudflare, not from a spoofed source. LXC containers normally inherit time from the Proxmox host, but NTS gives the host itself an authenticated source, and if you ever move GitLab to a VM or bare metal, it's already configured. Minor detail, but it costs nothing and removes a class of problems.
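
In chrony terms this is a one-line change; the nts keyword (chrony 4.0+) enables Network Time Security against Cloudflare's time service:

```
# /etc/chrony/chrony.conf: authenticated time from Cloudflare (NTS-KE on port 4460)
server time.cloudflare.com iburst nts
```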


The Result

A single Debian 13 LXC (8 CPU, 16 GB RAM, 50 GB disk) runs GitLab CE with:

  • No inbound ports open (tunnel handles everything)
  • No disk growth anxiety (R2 handles all object storage)
  • No manual cert renewals (DNS-01 + Certbot cron)
  • No exposed login page (Access OIDC gate)
  • Fast public downloads (CDN Worker + edge cache)
  • Event notifications without SMTP (Email Routing + Worker binding)
  • Daily offsite backups (DB + repos to R2)
  • SSH from anywhere (tunnel ProxyCommand)

Every script supports --dry-run. The whole thing deploys from a single ./deploy.sh after filling in .env. Validation (./validate.sh) checks 34 things end-to-end.


Product Summary

What I need             Cloudflare product            Why not something else
-----------             ------------------            ----------------------
Internet exposure       Tunnel                        No ports, no DDNS, no NAT hairpin
DNS                     Cloudflare DNS                Proxied records, tunnel integration
Identity / SSO          Access (Zero Trust)           OIDC provider, policy engine, session management
Object storage          R2                            S3-compatible, zero egress fees
Edge caching            Workers + Cache Rules         Programmable, VPC binding to private origin
Private origin access   VPC Service Binding           Worker-to-tunnel without public exposure
Firewall                WAF Custom Rules              Path-level allow/block on CDN hostname
Abuse prevention        Rate Limiting                 Per-IP per-colo limits on health endpoints
TLS automation          DNS-01 via API                Wildcard certs, no HTTP validation needed
SSH access              Tunnel + cloudflared client   Works from anywhere, no VPN needed
Event notifications     Email Routing                 Worker send_email binding, no SMTP server
Time sync               NTS                           Authenticated NTP, zero config

13 products. One LXC. One .env file.
