If I build a file server for my uncle's office on FreeBSD running some Samba shares, backed by ZFS (root on ZFS), taking ZFS snapshots every 15 minutes of the core data, and ZFS sending to my backup server off-site by cron once in a while, would this be a reasonable set-it-and-forget-it setup?
Short answer: yes, architecture-wise that’s a solid, grown-up setup — if you bolt on a few things so it can survive being ignored.
Let’s go through the important bits and what I’d add so you’re truly close to “set it and mostly forget it”.
Good:
- Root on ZFS: snapshots & easy rollback for system config.
- ZFS gives you checksums, self-healing (with redundancy), snapshots, send/recv. All the right primitives.
Make sure:
- Use mirrors or RAIDZ, never a single disk.
- Enable regular scrubs (e.g. monthly or biweekly via cron).
- Tune ARC so Samba doesn’t starve, especially on a low-RAM box (sketch below).
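For the scrub schedule and the ARC cap, here's a minimal one-time setup sketch. The pool name `tank` and the 8 GiB cap are assumptions; size `arc_max` to your actual RAM.

```sh
#!/bin/sh
# One-time setup sketch. Pool name "tank" and the 8 GiB ARC cap are
# assumptions; leave enough headroom for Samba and the rest of the OS.

# Cap the ARC (value in bytes; takes effect after reboot)
echo 'vfs.zfs.arc_max="8589934592"' >> /boot/loader.conf

# Monthly scrub, 1st of the month at 03:00, via the system crontab
echo '0 3 1 * * root /sbin/zpool scrub tank' >> /etc/crontab
```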
The 15-minute snapshots are also good, if you manage retention.
Suggestions:
- Per-dataset snapshot schedule. Common pattern:
  - 15-minutely: keep 1 day
  - Hourly: keep 1–2 days
  - Daily: keep 30–90 days
  - Weekly/monthly: keep long-term
- Use a script or tool (e.g. `zfsnap`, sanoid-style logic, or your own; sketch below) to:
  - create snapshots with clear names
  - prune old snapshots automatically
- Confirm `com.sun:auto-snapshot=false` on datasets that shouldn’t be spammed (logs, tmp, etc).
This is what turns “snapshots” into “actual point-in-time protection” instead of “eternal 15-minute clutter”.
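If you roll your own instead of using `zfsnap` or sanoid, the core logic is small. A minimal sketch, assuming a dataset `tank/data` and a single "frequent" tier kept for one day (real tools layer hourly/daily/weekly tiers on top of this):

```sh
#!/bin/sh
# Create-and-prune sketch, run from cron every 15 minutes.
# Assumptions: dataset "tank/data", one "frequent" tier kept for 1 day.
set -eu

DATASET="tank/data"
STAMP=$(date -u +%Y-%m-%d_%H.%M)
KEEP_SECONDS=$((24 * 60 * 60))

# 1. Create a recursive snapshot with a clear, sortable name
zfs snapshot -r "${DATASET}@frequent-${STAMP}"

# 2. Prune "frequent" snapshots older than the retention window
CUTOFF=$(( $(date -u +%s) - KEEP_SECONDS ))
zfs list -H -p -t snapshot -o name,creation -r "${DATASET}" |
while read -r name creation; do
    case "${name}" in
    *@frequent-*)
        if [ "${creation}" -lt "${CUTOFF}" ]; then
            zfs destroy "${name}"
        fi
        ;;
    esac
done
```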
The off-site send/receive is the critical part.
Key rules:
- Incremental, not full. Use `zfs send -i` from the last replicated snapshot; script this (sketch below).
- Pull if possible. Let the backup server pull from the office (less damage if the file server is compromised).
- Lock down the backup side:
  - The receive dataset should be read-only and not exported over SMB.
  - Use `zfs receive -u` (don’t auto-mount).
  - Restrict SSH keys/commands.
- Automated pruning off-site too:
  - Make sure retention there is independent of the source.
  - Keep longer history off-site than on the primary if you can.
- Test restores (non-optional):
  - Regularly `zfs send ... | zfs receive` into a test dataset, or mount a received snapshot and confirm files look right.
  - Practice restoring a single file and a whole share.
If you don’t test restore, the whole thing is vibes-based.
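A sketch of the pull side, run from the backup server's cron. The hostname, dataset names, and snapshot naming are assumptions, and the first full send has to be seeded by hand before incrementals will work:

```sh
#!/bin/sh
# Pull-replication sketch for the backup server's cron.
set -eu

REMOTE="backup@fileserver.example"   # assumption: SSH key-only access
SRC="tank/data"                      # dataset on the office server
DST="backup/office-data"             # local, unmounted, never shared over SMB

# Newest snapshot on the source, and the newest one we already hold
NEWEST=$(ssh "${REMOTE}" "zfs list -H -t snapshot -o name -s creation -d 1 ${SRC}" | tail -1 | cut -d@ -f2)
LAST=$(zfs list -H -t snapshot -o name -s creation -d 1 "${DST}" | tail -1 | cut -d@ -f2)

if [ -z "${LAST}" ] || [ "${NEWEST}" = "${LAST}" ]; then
    exit 0   # not seeded yet, or nothing new to pull
fi

# Incremental stream from the last common snapshot; -u keeps DST unmounted
ssh "${REMOTE}" "zfs send -i ${SRC}@${LAST} ${SRC}@${NEWEST}" | zfs receive -u "${DST}"
```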
On the Samba side, to make it boringly reliable for a small office:
- Store shares on separate datasets (e.g. `tank/data/accounting`, `tank/data/shared`).
- Use ZFS ACLs compatible with Windows ACLs (`aclmode=passthrough`, `aclinherit=passthrough` or similar).
- Enable shadow copies via snapshots:
  - Snapshot naming & frequency aligned with the Samba `shadow_copy2` config (sketch below).
  - That gives users self-service “Previous Versions” — huge quality-of-life.
Monitoring and maintenance are what actually bite people:
- SMART & disk health
  - `smartd` enabled + email/Slack alerts.
  - Don’t rely on “I’ll check later”. You won’t. Automate.
- ZFS health / pool status
  - Cron job that runs `zpool status -x` and yells if not “all pools are healthy” (sketch after this list).
- Scrubs
  - Cron: e.g. `zpool scrub tank` monthly.
  - Hook alerts if a scrub finds errors.
- OS & security updates
  - FreeBSD `freebsd-update` and package updates on a schedule.
  - Reboot occasionally so you’re not sitting on a 3-year uptime + 3 years of vulns.
- Backups of config
  - Back up `/etc`, `/usr/local/etc`, the Samba config, and any scripts (snapshot/replication).
  - Either in their own ZFS dataset (snapshotted & replicated) or pushed to a git/private repo.
- UPS
  - Put the box on a UPS with clean shutdown via `nut` or `apcupsd`. ZFS really wants that.
- Documentation
  - One short text file: IPs, root password procedure, how to restore a file, how replication works.
  - This is for Future You.
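For the pool-status check, a minimal cron sketch, assuming base-system `mail(1)` can actually deliver from this box and an alert address of your choosing:

```sh
#!/bin/sh
# Cron sketch: mail an alert unless zpool reports every pool healthy.
# The recipient is an assumption; swap in whatever alerting you really use.
ALERT_TO="admin@example.com"

STATUS=$(zpool status -x)
if [ "${STATUS}" != "all pools are healthy" ]; then
    printf '%s\n' "${STATUS}" | mail -s "ZFS problem on $(hostname)" "${ALERT_TO}"
fi
```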
If you:
- use ZFS with redundancy,
- automate snapshots + pruning,
- automate incremental send/receive + pruning off-site,
- enable health monitoring + alerts,
- test restore occasionally,
- keep Samba/shares on sane datasets and set up shadow copies,
…then yes: that’s a very respectable low-maintenance file server for your uncle’s office.
If you’d like, next message I can sketch a concrete cron + config layout (datasets, snapshot names, send/recv scripts) tailored to this setup so you can drop it in.