
@rma92
Created November 10, 2025 14:19

user

If I build a file server for my uncle's office on FreeBSD running some Samba shares, backed by ZFS, root on ZFS, that takes ZFS snapshots of the core data every 15 minutes, and does ZFS sends to my backup server off-site by cron once in a while, would this be a reasonably set-it-and-forget-it setup?

assistant

Short answer: yes, architecture-wise that’s a solid, grown-up setup — if you bolt on a few things so it can survive being ignored.

Let’s go through the important bits and what I’d add so you’re truly close to “set it and mostly forget it”.


1. ZFS + root on ZFS

Good:

  • Root on ZFS: snapshots & easy rollback for system config.
  • ZFS gives you checksums, self-healing (with redundancy), snapshots, send/recv. All the right primitives.

Make sure:

  • Use mirrors or RAIDZ, never a single disk.
  • Enable regular scrubs (e.g. monthly or biweekly via cron or periodic; sketch below).
  • Cap the ARC so Samba and the rest of the system don't starve for RAM (especially on a low-RAM box).
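
To make the scrub and ARC bits genuinely hands-off, here's a minimal sketch using FreeBSD's periodic(8) and a loader tunable; the 30-day threshold and the 8 GiB ARC cap are example values to adjust for your hardware:

```sh
# /etc/periodic.conf -- have periodic(8) kick off scrubs automatically
# (scrubs any pool whose last scrub is older than the threshold, in days)
daily_scrub_zfs_enable="YES"
daily_scrub_zfs_default_threshold="30"

# /boot/loader.conf -- cap the ARC so Samba and the rest of the OS keep some RAM
# (example: 8 GiB cap on a 16 GiB box; the value is in bytes)
vfs.zfs.arc_max="8589934592"
```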

2. Snapshots every 15 minutes

Also good, if you manage retention.

Suggestions:

  • Per-dataset snapshot schedule. Common pattern:

    • 15-minutely: keep 1 day
    • Hourly: keep 1–2 days
    • Daily: keep 30–90 days
    • Weekly/monthly: keep long-term
  • Use a script or tool (e.g. zfsnap, sanoid-style logic, or your own; a sketch follows below) to:

    • create snapshots with clear names
    • prune old snapshots automatically
  • Confirm com.sun:auto-snapshot=false on datasets that shouldn't be spammed (logs, tmp, etc.).

This is what turns “snapshots” into “actual point-in-time protection” instead of “eternal 15-minute clutter”.
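
For the 15-minute tier, a minimal create-and-prune script could look like the sketch below; the dataset name, snapshot prefix, and one-day retention are placeholders, and zfsnap or sanoid-style tooling gets you the same result with less code of your own to maintain:

```sh
#!/bin/sh
# snap15.sh -- take a recursive 15-minute snapshot of DATASET and prune
# snapshots older than KEEP_MINUTES.
# Run from root's crontab:  */15 * * * * /root/bin/snap15.sh
set -eu

DATASET="tank/data"       # placeholder: your data dataset
PREFIX="auto-15min"       # placeholder: keep it aligned with shadow_copy2 later
KEEP_MINUTES=1440         # one day of 15-minute snapshots

NOW=$(date +%Y-%m-%d_%H.%M)
zfs snapshot -r "${DATASET}@${PREFIX}-${NOW}"

# Prune: list only this dataset's own snapshots (they were created with -r,
# so destroying the top-level one with -r removes the children too).
CUTOFF=$(( $(date +%s) - KEEP_MINUTES * 60 ))
zfs list -H -p -t snapshot -o name,creation -d 1 "${DATASET}" |
awk -v prefix="@${PREFIX}-" -v cutoff="$CUTOFF" \
    'index($1, prefix) && $2 < cutoff { print $1 }' |
while read -r snap; do
    zfs destroy -r "$snap"
done
```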


3. ZFS send to off-site backup

This is the critical part.

Key rules:

  1. Incremental, not full. Use zfs send -i from the last replicated snapshot; script this (sketch below).

  2. Pull if possible. Let the backup server pull from the office (less damage if the file server is compromised).

  3. Lock down backup side

    • Receive dataset should be read-only and not exported over SMB.
    • Use zfs receive -u (don’t auto-mount).
    • Restrict SSH keys/commands.
  4. Automated pruning off-site too

    • Make sure retention there is independent of source.
    • Keep longer history off-site than on primary if you can.
  5. Test restores (non-optional)

    • Regularly: zfs send ... | zfs receive into a test dataset, or mount a received snapshot and confirm files look right.
    • Practice restoring a single file and a whole share.

If you don't test restores, the whole thing is vibes-based.
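
Roughly, a pull-style replication job on the backup server could look like this. The hostname and dataset names are made up, it assumes the initial full send | receive was done by hand, and it relies on matching snapshot names rather than bookmarks; tools like zrepl or syncoid handle the edge cases (resume, pruning, holds) for you:

```sh
#!/bin/sh
# pull-replica.sh -- runs on the BACKUP server from cron. Pulls an
# incremental zfs send from the office box over SSH and receives it
# unmounted into a local dataset that is never shared out.
set -eu

SRC_HOST="office-fs"               # assumption: SSH alias for the office server
SRC_DATASET="tank/data"            # assumption
DST_DATASET="backup/office/data"   # assumption: keep readonly=on, no SMB export

# Newest snapshot on each side (snapshot names are assumed to match).
LAST_LOCAL=$(zfs list -H -t snapshot -o name -s creation -d 1 "${DST_DATASET}" |
             tail -1 | cut -d@ -f2)
NEWEST_SRC=$(ssh "${SRC_HOST}" zfs list -H -t snapshot -o name -s creation -d 1 "${SRC_DATASET}" |
             tail -1 | cut -d@ -f2)

[ -n "${LAST_LOCAL}" ] || { echo "no local snapshot; do the first full send/receive by hand" >&2; exit 1; }
[ "${LAST_LOCAL}" = "${NEWEST_SRC}" ] && exit 0    # nothing new to pull

# Incremental replication stream from the last common snapshot,
# received unmounted (-u) so nothing on the backup box ever touches it.
ssh "${SRC_HOST}" zfs send -R -i "@${LAST_LOCAL}" "${SRC_DATASET}@${NEWEST_SRC}" |
    zfs receive -u -F "${DST_DATASET}"
```

For the restore drill, receiving one of these snapshots into a scratch dataset and pulling a file back out covers both the single-file and whole-share cases.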


4. Samba specifics

To make it boringly reliable for a small office:

  • Store shares on separate datasets (e.g. tank/data/accounting, tank/data/shared).

  • Use ZFS ACLs compatible with Windows ACLs (aclmode=passthrough, aclinherit=passthrough or similar).

  • Enable shadow copies via snapshots:

    • Snapshot naming & frequency aligned with your Samba shadow_copy2 config (example below).
    • That gives users self-service “Previous Versions”, a huge quality-of-life win.
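
To make that concrete, here's a hedged sketch of the dataset setup and the relevant smb4.conf bits; the share path, ACL settings, and especially shadow:format (which has to match whatever snapshot naming you actually use) are assumptions to adapt:

```sh
# One dataset per share, with Windows-friendly ACL behaviour baked in.
zfs create -o aclmode=passthrough -o aclinherit=passthrough tank/data/shared
zfs create -o aclmode=passthrough -o aclinherit=passthrough tank/data/accounting
```

```ini
; /usr/local/etc/smb4.conf (per-share section)
[shared]
    path = /tank/data/shared
    read only = no
    vfs objects = shadow_copy2 zfsacl

    ; map "Previous Versions" onto the ZFS snapshot directory
    shadow:snapdir = .zfs/snapshot
    shadow:sort = desc
    ; must match the snapshot naming scheme, e.g. auto-15min-2025-11-10_14.15
    shadow:format = auto-15min-%Y-%m-%d_%H.%M
    ; the snapshots in the sketch above are named in local time, not UTC
    shadow:localtime = yes

    ; present the ZFS/NFSv4 ACLs to Windows clients
    nfs4:mode = special
    nfs4:acedup = merge
```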

5. Things you must not forget (even in “set-and-forget”)

These are what bite people:

  1. SMART & disk health

    • smartd enabled + email/Slack alerts.
    • Don’t rely on “I’ll check later”. You won’t. Automate.
  2. ZFS health / pool status

    • A cron job that runs zpool status -x and yells if the output isn't “all pools are healthy” (sketch after this list).
  3. Scrubs

    • Cron: e.g. zpool scrub tank monthly.
    • Hook alerts if scrub finds errors.
  4. OS & security updates

    • freebsd-update for the base system and pkg upgrade for packages, on a schedule.
    • Reboot occasionally so you’re not sitting on a 3-year uptime + 3 years of vulns.
  5. Backups of config

    • Backup:

      • /etc
      • /usr/local/etc
      • Samba config
      • any scripts (snapshot/replication)
    • Either in their own ZFS dataset (snapshotted & replicated) or pushed to a private git repo.

  6. UPS

    • Put the box on a UPS with clean shutdown via nut or apcupsd. ZFS really wants that.
  7. Documentation

    • One short text file:

      • IPs, root password procedure, how to restore a file, how replication works.
    • This is for Future You.
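
As an example of the "yell only when something is wrong" idea for items 1–3, a daily health-check script might look like this; the pool name, disk list, and alert address are assumptions, and smartctl comes from sysutils/smartmontools (give cron a PATH that includes /usr/local/sbin):

```sh
#!/bin/sh
# health-check.sh -- run daily from cron: email an alert only when the
# pool or a disk looks unhealthy, stay silent otherwise.
set -u

MAILTO="you@example.com"     # assumption: where alerts should go
POOL="tank"                  # assumption: your pool name
TMP=$(mktemp) || exit 1
trap 'rm -f "$TMP"' EXIT

# Pool health: "zpool status -x" prints details only when something is wrong.
zpool status -x "$POOL" | grep -v "is healthy" >> "$TMP"

# SMART overall health per disk (adjust the device list; SAS disks report
# "OK" instead of "PASSED").
for disk in /dev/ada0 /dev/ada1; do
    if ! smartctl -H "$disk" | grep -q "PASSED"; then
        echo "SMART health check failed on ${disk}" >> "$TMP"
    fi
done

# Only send mail if anything was flagged above.
if [ -s "$TMP" ]; then
    mail -s "$(hostname): storage health alert" "$MAILTO" < "$TMP"
fi
```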


Verdict

If you:

  • use ZFS with redundancy,
  • automate snapshots + pruning,
  • automate incremental send/receive + pruning off-site,
  • enable health monitoring + alerts,
  • test restore occasionally,
  • keep Samba/shares on sane datasets and set up shadow copies,

…then yes: that’s a very respectable low-maintenance file server for your uncle’s office.

If you’d like, next message I can sketch a concrete cron + config layout (datasets, snapshot names, send/recv scripts) tailored to this setup so you can drop it in.
