This guide builds on the Perfect Media Server setup by using BTRFS for the data drives and taking advantage of snapraid-btrfs to manage SnapRAID operations using read-only BTRFS snapshots where possible. One of the main limitations of SnapRAID is that it depends on the live data remaining accessible and unchanged, not only for a complete parity sync, but also for a complete recovery in the event that a drive needs to be rebuilt from parity.
Note:
Recent updates in Snapper v0.11+ broke the snapraid-btrfs script by automorphism88. Use this patched version by D34DC3N73R until automorphism88's version is fixed.
Using snapraid-btrfs, there is no requirement to stop any services or ensure that the live filesystem is free of any new files or changes to existing files.
From the snapraid-btrfs repo:
A major disadvantage of SnapRAID is that the parity files are not updated in realtime. This not only means that new files are not protected until after running snapraid sync, but also creates a form of "write hole" where if files are modified or deleted, some protection of other files which share the same parity block(s) is lost until another sync is completed, since if other files need to be restored using the snapraid fix command, the deleted or modified files will not be available, just as if the disk had failed, or developed a bad sector. This problem can be mitigated by adding additional parities, since SnapRAID permits up to six, or worked around by temporarily moving files into a directory that is excluded in your SnapRAID config file, then completing a sync to remove them from the parity before deleting them. However, this problem is a textbook use case for btrfs snapshots.
By using read-only snapshots when we do a snapraid sync, we ensure that if we modify or delete files during or after the sync, we can always restore the array to the state it was in at the time the read-only snapshots were created, so long as the snapshots are not deleted until another sync is completed with new snapshots. This use case for btrfs snapshots is similar to using btrfs send/receive to back up a live filesystem, where the use of read-only snapshots guarantees the consistency of the result, while using dd would require that the entire filesystem be mounted read-only to prevent corruption caused by writes to the live filesystem during the backup.
Another benefit of using BTRFS for the data drives is that it allows for online drive replacements (i.e. upgrades to larger drives) using btrfs-replace. This means the MergerFS pool can remain mounted, with all data accessible for reads and writes, while the underlying data drives are being replaced. This guide includes instructions for replacing BTRFS data drives in the array.
This is based on the guide from the Self-Hosted Show Wiki, with updates for my (personal) layout as well as corrections/changes for current software versions. The guide targets Ubuntu 24.04.
Note
This guide uses filesystem UUIDs for the mountpoints in /etc/fstab. Using UUIDs is robust and won't be impacted by changes in drive order, so consider switching your fstab over if you haven't done so already. You can find the UUID for a partition with blkid | grep /dev/sdX1.
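For example, once a drive has been formatted (see below), blkid reports the UUID along with the label and filesystem type; the values shown here are placeholders:
# blkid /dev/sdX1
/dev/sdX1: LABEL="data1" UUID="XXXXXXXX-YYYY-ZZZZ-AAAA-BBBBBBBBBBBB" UUID_SUB="..." BLOCK_SIZE="4096" TYPE="btrfs"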
Install BTRFS tools:
apt install btrfs-progs
Format each data drive using BTRFS:
mkfs.btrfs -L <data#> /dev/sdX1
Note
I use BTRFS for the parity drives as well. The original guide recommends ext4, which is fine, but it offers no particular advantage here, so why mix filesystems? See the SnapRAID FAQ:
Which file-system is recommended for SnapRAID?
Format each parity drive using BTRFS:
mkfs.btrfs -L <parity#> /dev/sdY1
Create mountpoint for the parity volume:
mkdir -p /mnt/snapraid-parity/parity1
Add entry in /etc/fstab for the parity volume:
# Snapraid Parity
UUID=03b6ae59-65fb-488e-8f24-f5a32405c9ff /mnt/snapraid-parity/parity1 btrfs defaults 0 0
Note:
You don't have to create a subvolume for the parity drive. I did for consistency but I don't think it offers a significant advantage. It's up to you.
Mount the parity volume and create parity subvolume:
mount /mnt/snapraid-parity/parity1
btrfs subvolume create /mnt/snapraid-parity/parity1/parity
Unmount the parity volume:
umount /mnt/snapraid-parity/parity1
Update the entries in /etc/fstab to mount the parity subvolumes:
### /etc/fstab BTRFS parity subvolumes
UUID=03b6ae59-65fb-488e-8f24-f5a32405c9ff /mnt/snapraid-parity/parity1 btrfs subvol=/parity,defaults 0 0
...
Mount the parity subvolume:
mount /mnt/snapraid-parity/parity1
To use BTRFS snapshots as this guide suggests, the data itself will reside in a BTRFS /data subvolume that needs to be created on each drive. The SnapRAID .content files do not need to be snapshotted, so it is recommended that any .content files stored on the array live in separate BTRFS subvolumes.
Create mountpoints for the BTRFS data and content volumes:
mkdir -p /mnt/snapraid-data/data{1,2,3,4}
mkdir -p /mnt/snapraid-content/content{1,2,3,4}
Add entries in /etc/fstab for the root filesystems:
### /etc/fstab BTRFS root filesystems
UUID=XXXXXXXX-YYYY-ZZZZ-AAAA-BBBBBBBBBBBB /mnt/snapraid-data/data1 btrfs defaults 0 0
UUID=YYYYYYYY-ZZZZ-AAAA-BBBB-CCCCCCCCCCCC /mnt/snapraid-data/data2 btrfs defaults 0 0
...
Mount each disk and create BTRFS data and content subvolumes on each root filesystem:
mount /mnt/snapraid-data/data1
btrfs subvolume create /mnt/snapraid-data/data1/data
btrfs subvolume create /mnt/snapraid-data/data1/content
mount /mnt/snapraid-data/data2
btrfs subvolume create /mnt/snapraid-data/data2/data
btrfs subvolume create /mnt/snapraid-data/data2/content
...
Unmount the drives:
umount /mnt/snapraid-data/data1
umount /mnt/snapraid-data/data2
Update the entries in /etc/fstab to mount the data subvolumes:
### /etc/fstab BTRFS data subvolumes
UUID=XXXXXXXX-YYYY-ZZZZ-AAAA-BBBBBBBBBBBB /mnt/snapraid-data/data1 btrfs subvol=/data,defaults 0 0
UUID=YYYYYYYY-ZZZZ-AAAA-BBBB-CCCCCCCCCCCC /mnt/snapraid-data/data2 btrfs subvol=/data,defaults 0 0
...
Add entries in /etc/fstab to mount the content subvolumes: (the UUID will be the same as for the data subvolumes)
### /etc/fstab BTRFS content subvolumes
UUID=XXXXXXXX-YYYY-ZZZZ-AAAA-BBBBBBBBBBBB /mnt/snapraid-content/content1 btrfs subvol=/content,defaults 0 0
UUID=YYYYYYYY-ZZZZ-AAAA-BBBB-CCCCCCCCCCCC /mnt/snapraid-content/content2 btrfs subvol=/content,defaults 0 0
...
Mount the data volumes:
mount /mnt/snapraid-data/data1
mount /mnt/snapraid-data/data2
mount /mnt/snapraid-content/content1
mount /mnt/snapraid-content/content2
...
The fstab steps for the MergerFS pool are changed slightly from the Perfect Media Server installation guide. Create a mountpoint for the pool (this guide uses /media; the Perfect Media Server guide uses /mnt/storage) and follow the fstab entries section of the guide:
### MergerFS mount
/mnt/snapraid-data/data* /media fuse.mergerfs allow_other,use_ino,cache.files=partial,dropcacheonclose=true,category.create=mfs,func.getattr=newest,fsname=mergerfs,cache.statfs=10,xattr=noattr,readahead=2048 0 0
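Create the mountpoint and mount the pool (using the /media path from the entry above; adjust if you mount it elsewhere):
mkdir -p /media
mount /media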
Install SnapRAID as per the Perfect Media Server installation guide. To configure SnapRAID, ensure it points to the correct mount points that were created earlier. Using the example at Perfect Media Server as a basis, and updating for the mountpoints above results in the following:
# Snapraid configuration file
# Defines the file to use as parity storage
# It must NOT be in a data disk
# Format: "parity FILE [,FILE] ..."
parity /mnt/snapraid-parity/parity1/snapraid.parity
# Defines the files to use as additional parity storage.
# If specified, they enable the multiple failures protection
# from two to six level of parity.
# To enable, uncomment one parity file for each level of extra
# protection required. Start from 2-parity, and follow in order.
# It must NOT be in a data disk
# Format: "X-parity FILE [,FILE] ..."
#2-parity /mnt/snapraid-parity/parity2/snapraid.parity
#3-parity /mnt/diskr/snapraid.3-parity
#4-parity /mnt/disks/snapraid.4-parity
#5-parity /mnt/diskt/snapraid.5-parity
#6-parity /mnt/disku/snapraid.6-parity
# Defines the files to use as content list
# You can use multiple specifications to store more copies
# You must have at least one copy for each parity file plus one. Some more don't hurt
# They can be in the disks used for data, parity or boot,
# but each file must be in a different disk
# Format: "content FILE"
content /mnt/snapraid-content/content1/snapraid.content
content /mnt/snapraid-content/content2/snapraid.content
content /mnt/snapraid-content/content3/snapraid.content
content /mnt/snapraid-content/content4/snapraid.content
content /var/snapraid/snapraid.content
# Defines the data disks to use
# The name and mount point association is relevant for parity, do not change it
# WARNING: Adding here your /home, /var or /tmp disks is NOT a good idea!
# SnapRAID is better suited for files that rarely change!
# Format: "data DISK_NAME DISK_MOUNT_POINT"
data d1 /mnt/snapraid-data/data1
data d2 /mnt/snapraid-data/data2
data d3 /mnt/snapraid-data/data3
data d4 /mnt/snapraid-data/data4
# Excludes hidden files and directories (uncomment to enable).
#nohidden
# Defines files and directories to exclude
# Remember that all the paths are relative to the mount points
# Format: "exclude FILE"
# Format: "exclude DIR/"
# Format: "exclude /PATH/FILE"
# Format: "exclude /PATH/DIR/"
exclude /.snapshots/
exclude snapraid.content
exclude *.!sync
exclude *.unrecoverable
exclude *.part
exclude backups/
exclude tmp.*
exclude /tmp/
exclude /lost+found/
exclude incomplete/
exclude cache/
exclude caches/
# Defines the block size in kibi bytes (1024 bytes) (uncomment to enable).
# WARNING: Changing this value is for experts only!
# Default value is 256 -> 256 kibi bytes -> 262144 bytes
# Format: "blocksize SIZE_IN_KiB"
#blocksize 256
# Defines the hash size in bytes (uncomment to enable).
# WARNING: Changing this value is for experts only!
# Default value is 16 -> 128 bits
# Format: "hashsize SIZE_IN_BYTES"
#hashsize 16
# Automatically save the state when syncing after the specified amount
# of GB processed (uncomment to enable).
# This option is useful to avoid restarting long 'sync'
# commands from scratch if they are interrupted by a machine crash.
# It also improves recovery if a disk breaks during a 'sync'.
# Default value is 0, meaning disabled.
# Format: "autosave SIZE_IN_GB"
autosave 1000
# Defines the pooling directory where the virtual view of the disk
# array is created using the "pool" command (uncomment to enable).
# The files are not really copied here, but just linked using
# symbolic links.
# This directory must be outside the array.
# Format: "pool DIR"
#pool /pool
# Defines a custom smartctl command to obtain the SMART attributes
# for each disk. This may be required for RAID controllers and for
# some USB disks that cannot be autodetected.
# In the specified options, the "%s" string is replaced by the device name.
# Refer to the smartmontools documentation about the possible options:
# RAID -> https://www.smartmontools.org/wiki/Supported_RAID-Controllers
# USB -> https://www.smartmontools.org/wiki/Supported_USB-Devices
#smartctl d1 -d sat %s
#smartctl d2 -d usbjmicron %s
#smartctl parity -d areca,1/1 /dev/sg0
#smartctl 2-parity -d areca,2/1 /dev/sg0
At this point, the system is set up to run native SnapRAID as described in the Perfect Media Server installation guide, the only difference being that the data is stored on BTRFS subvolumes.
The remainder of this guide will discuss how to leverage snapshots of the BTRFS data subvolumes for SnapRAID parity operations.
Note:
Unfortunately the version of snapper in Ubuntu 24.04 is out of date. I was unable to get a binary version from snapper.io to work due to issues with the GPG key and dependencies, so I compiled it. Instructions for that are outside the scope of this guide, but I needed to add these packages to my Ubuntu 24.04 installation to get it to compile:
libboost-test-dev libboost-dev libboost-algorithm-dev docbook-xsl libmount-dev libdbus-1-dev libacl1-dev libxml2-dev libbtrfs-dev xsltproc libjson-c-dev libboost-thread-dev
Because this is not an Ubuntu build, it expects to find its config in /etc/sysconfig/snapper; I created a link to /etc/snapper.conf to retain the Ubuntu style. Templates are in /usr/src/snapper/data.
In order to create and work with BTRFS snapshots, snapraid-btrfs uses Snapper. As noted above, you may need to compile the newest version yourself; otherwise, install it from the Ubuntu repositories as follows:
apt install snapper
Snapper requires that a configuration profile be created for each subvolume that will be snapshotted. It can also take new snapshots and clean up old ones on a regular basis using timeline policies.
Note
For the purposes of this guide, the timeline-based snapshots are not required for snapraid-btrfs. snapraid-btrfs will create its own snapshots in conjunction with SnapRAID operations.
The default Snapper configuration template will be used as a basis for a minimal template for MergerFS data drives.
cd /etc/snapper/config-templates
cp default mergerfsdisk
To disable timeline-based snapshots, edit the /etc/snapper/config-templates/mergerfsdisk template as follows:
...
# create hourly snapshots
TIMELINE_CREATE="no"
...
Additional config options can be found at the snapper-configs man page.
Create Snapper profiles for each data subvolume created earlier using the mergerfsdisk template.
snapper -c data1 create-config -t mergerfsdisk /mnt/snapraid-data/data1
snapper -c data2 create-config -t mergerfsdisk /mnt/snapraid-data/data2
...
The resulting config files can be found in /etc/snapper/configs, and the subvolumes they relate to can be verified by running the following:
snapper list-configs
snapraid-btrfs config
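If the profiles were created correctly, snapper list-configs will show one config per data subvolume. The output should look roughly like this sketch (config names and paths will reflect your own layout):
Config | Subvolume
-------+--------------------------
data1  | /mnt/snapraid-data/data1
data2  | /mnt/snapraid-data/data2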
Install snapraid-btrfs by cloning the Git repo and copying the snapraid-btrfs script to your system. Until the version by automorphism88 is fixed to support Snapper v0.11+, use the one from D34DC3N73R:
# git clone https://github.com/automorphism88/snapraid-btrfs.git
git clone https://github.com/D34DC3N73R/snapraid-btrfs.git
cd snapraid-btrfs
cp snapraid-btrfs /usr/local/bin
chmod +x /usr/local/bin/snapraid-btrfs
Verify that snapraid-btrfs is successfully able to see Snapper configs for each data subvolume by running the following:
snapraid-btrfs ls
At this point, any snapraid <command> can be run as snapraid-btrfs <command>. Depending on the command, snapraid-btrfs will either take a snapshot, use the latest existing snapshot, or use the live filesystem before passing the command on to SnapRAID for processing.
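For example, a few common invocations (see snapraid-btrfs --help for the full list):
snapraid-btrfs sync     # take fresh read-only snapshots of each data subvolume and sync parity against them
snapraid-btrfs diff     # show what has changed relative to the last synced snapshots
snapraid-btrfs scrub    # scrub the array
snapraid-btrfs cleanup  # delete snapraid-btrfs snapshots other than those from the last sync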
To automate daily parity sync and scrub operations using snapraid-btrfs, this guide uses snapraid-btrfs-runner based on the upstream snapraid-runner tool that is commonly used for SnapRAID automation.
snapraid-btrfs-runner conducts the same basic tasks as snapraid-runner, including diff, sync, and scrub operations.
Install snapraid-btrfs-runner by cloning the repository:
cd /usr/src
git clone https://github.com/fmoledina/snapraid-btrfs-runner.git
Alternatively, this fork provides telegram and healthchecks integrations:
cd /usr/src
git clone https://github.com/tyjtyj/snapraid-btrfs-runner
I also updated the default path for the conf file in the script (around line 280) to point to a standard location:
parser.add_argument("-c", "--conf",
default="/etc/snapraid-btrfs-runner.conf",
metavar="CONFIG",
help="Configuration file (default: %(default)s)")
Create a configuration file based on the example provided at /usr/src/snapraid-btrfs-runner/snapraid-btrfs-runner.conf.example. Copy this to /etc/snapraid-btrfs-runner.conf or someplace else (remember to use --conf <path-to-conf> if you do).
All of the snapraid-runner options are present, with additional options available for snapraid-btrfs and Snapper.
Required config parameters are as follows:
snapraid-btrfs.executable location (/usr/local/bin/snapraid-btrfs)
snapper.executable location (/usr/bin/snapper)
snapraid.executable location (/usr/local/bin/snapraid)
snapraid.config location (/etc/snapraid.conf)
Other config parameters of interest are as follows:
snapraid-btrfs.cleanup: Upon a successful run of snapraid-btrfs-runner, any interim snapshots created during the process will be removed, leaving only the snapraid-btrfs=synced snapshot. Defaults to true.
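A minimal sketch of the sections referenced above (the remaining options, such as logging, email, and scrub, follow the standard snapraid-runner settings shown in the example file):
[snapraid-btrfs]
executable = /usr/local/bin/snapraid-btrfs
cleanup = true

[snapper]
executable = /usr/bin/snapper

[snapraid]
executable = /usr/local/bin/snapraid
config = /etc/snapraid.conf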
Copy snapraid-btrfs-runner.py and set executable:
sudo cp /usr/src/snapraid-btrfs-runner/snapraid-btrfs-runner.py /usr/local/bin
sudo chmod +x /usr/local/bin/snapraid-btrfs-runner.py
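Before scheduling it, it's worth doing a manual run to confirm the configuration is found and a sync completes end to end:
/usr/local/bin/snapraid-btrfs-runner.py -c /etc/snapraid-btrfs-runner.conf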
Scheduling can be set either via cron or systemd timers. For example, the following /etc/crontab entry runs the job daily at 00:07:
7 0 * * * root /usr/local/bin/snapraid-btrfs-runner.py -c /etc/snapraid-btrfs-runner.conf
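If you prefer systemd timers, a minimal sketch of equivalent units might look like the following (the unit names and the 00:07 schedule are arbitrary choices):
# /etc/systemd/system/snapraid-btrfs-runner.service
[Unit]
Description=snapraid-btrfs-runner sync job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/snapraid-btrfs-runner.py -c /etc/snapraid-btrfs-runner.conf

# /etc/systemd/system/snapraid-btrfs-runner.timer
[Unit]
Description=Daily snapraid-btrfs-runner sync

[Timer]
OnCalendar=*-*-* 00:07:00
Persistent=true

[Install]
WantedBy=timers.target
Enable the timer with systemctl enable --now snapraid-btrfs-runner.timer.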
BTRFS allows for online replacement of data drives using btrfs-replace. This allows data drives to be replaced with larger ones without ever having to unmount the MergerFS array. In fact, MergerFS is unaware of the operation, since from its perspective the member mount point remains accessible while BTRFS carries out the drive replacement in the background. The target drive needs to be the same size or larger than the source drive.
Use the following steps to replace a BTRFS drive in the above array, e.g. /mnt/snapraid-data/data1.
Warning
Verify that the correct drive is being replaced. Check both source and target drives using serial numbers in /dev/disk/by-id or similar unique identifiers prior to starting the replacement.
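For example, to confirm which physical drive sits behind a given device node (sdX here is a placeholder for the device in question):
ls -l /dev/disk/by-id/ | grep sdX
smartctl -i /dev/sdX | grep -i serial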
Check the current BTRFS filesystem state:
# sudo btrfs filesystem show /mnt/snapraid-data/data1
Label: 'data1' uuid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Total devices 1 FS bytes used 6.62TiB
devid 1 size 7.28TiB used 7.01TiB path /dev/sde1
Note
BTRFS filesystem-related commands can be carried out at any mountpoint that the filesystem is mounted on. This includes the root mountpoint or any subvolumes that are mounted. In this example, commands are executed at the /mnt/snapraid-data/data1 subvolume mountpoint created above, as it's already conveniently mounted.
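The new drive needs a partition table and a single partition spanning the disk before it can be used as the replacement target. A minimal sketch using parted, assuming the new drive appears as /dev/sdy (a hypothetical device name; verify yours first):
parted --script /dev/sdy mklabel gpt
parted --script /dev/sdy mkpart primary 0% 100%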
With the partition table and primary partition created on the new drive, and assuming the new partition is /dev/sdy1, start the BTRFS replacement with the following command:
btrfs replace start 1 /dev/sdy1 /mnt/snapraid-data/data1
where 1 refers to the devid from the above filesystem output. This command will produce no output, and BTRFS will have started the replacement in the background. If the drive has read errors, using the option -r may speed up the process.
To check the status of the replacement:
btrfs replace status -1 /mnt/snapraid-data/data1
which provides the current status and exits. To continuously monitor the replacement status, run the btrfs replace status command with the -1 omitted.
Details of the filesystem during the drive replacement will appear as follows:
# btrfs filesystem show /mnt/snapraid-data/data1
Label: 'data1' uuid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Total devices 2 FS bytes used 10.14TiB
devid 0 size 10.91TiB used 10.17TiB path /dev/sdy1
devid 1 size 10.91TiB used 10.17TiB path /dev/sds1
Once the drive replacement is successfully completed, the above btrfs replace status command would result in the following output:
# btrfs replace status /mnt/snapraid-data/data1
Started on 26.Jan 20:14:52, finished on 27.Jan 12:43:39, 0 write errs, 0 uncorr. read errs
Details of the filesystem after the drive replacement will appear as follows:
# btrfs filesystem show /mnt/snapraid-data/data1
Label: 'data1' uuid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Total devices 1 FS bytes used 10.14TiB
devid 1 size 10.91TiB used 10.17TiB path /dev/sdy1
If the target drive is larger than the source drive, the BTRFS filesystem will need to be resized to take advantage of the full space:
$ btrfs filesystem resize 1:max /mnt/snapraid-data/data1
Resize '/mnt/snapraid-data/data1' of '1:max'
where 1 refers to the devid from above (which is now the new device) and max indicates that the filesystem should use all the available space on the drive.
The final details of the filesystem will now appear as follows:
# btrfs filesystem show /mnt/snapraid-data/data1
Label: 'data1' uuid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Total devices 1 FS bytes used 10.14TiB
devid 1 size 12.73TiB used 10.17TiB path /dev/sdy1
At this point, the replacement operation is complete and the MergerFS pool should be able to see the additional space.