Arch Linux with ZFS root, zfs-dkms, ZFSBootMenu, and Secure Boot enabled

Install Arch Linux with ZFS root filesystem, zfs-dkms, ZFSBootMenu, Pacman Auto-snapshots, Secure Boot enabled

Configure system BIOS

Go into your BIOS settings and make sure Secure Boot is either turned off or set to Audit Mode.

Decide which install method \ Media to use

Method 1. Use a script to add ZFS support after booting the vanilla Arch install media. This builds the necessary ZFS modules within the Arch install environment.

Download the official Arch install media here:

Boot the install media and run the following command:

➜ curl -s https://raw.githubusercontent.com/eoli3n/archiso-zfs/master/init | bash

Method 2. Get Arch install media prebuilt with ZFS kernel support. It's also an option to build your own Arch ISO with all the required ZFS support, but that takes a considerable amount of time and is beyond the scope of this guide. For now, let's go with the shortcut below.

Method 3. (Alternative) ZFS automated install script. There is a Bash script that can automate the configuration and installation of a ZFS root system on Arch Linux. However, as convenient as that sounds, the script is limited in flexibility and scope. If you want to install a ZFS root system as quickly as possible and don't care about the particulars, take a good look at this GitHub page.

Boot into the install media environment and set some preferences

Set the console font:

➜ setfont ter-128n

Verify DNS and the internet connection are good (I recommend using a wired connection like ethernet for less hassle):

➜ ping yahoo.com

Set up the closest Arch mirrors according to your location or country:

➜ reflector --country Japan --latest 5 --sort rate --save /etc/pacman.d/mirrorlist

Verify zfs kernel modules are loaded:

➜ lsmod | grep zfs
zfs                  4218880  11
zunicode              339968  1 zfs
zzstd                 552960  1 zfs
zlua                  208896  1 zfs
zavl                   16384  1 zfs
icp                   331776  1 zfs
zcommon               110592  2 zfs,icp
znvpair               118784  2 zfs,zcommon
spl                   122880  6 zfs,icp,zzstd,znvpair,zcommon,zavl

Partition the storage devices

We'll be using the UEFI boot method, which requires us to create an EFI partition to launch Linux. Run the following command to identify the storage devices and partition info:

➜ lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
mmcblk0      179:0    0  29.1G  0 disk
nvme0n1      259:0    0 465.8G  0 disk

*Note: From a security standpoint it is possible to have the EFI (boot) partition on an entirely different storage device than the actual OS root filesystem, e.g., mmcblk0, the Micro SD card. Skip below to "As mentioned above" if you're interested in this setup.

The M.2 device will be our target install storage device (nvme0n1). Save the storage path into a variable to make things easy:

➜ DISK=/dev/nvme0n1

You can use any partitioning tool... In this example we'll use sgdisk one-liners to create the partition scheme we need.

➜ sgdisk --zap-all $DISK
➜ sgdisk -n1:1M:+256M -t1:EF00 $DISK
➜ sgdisk -n2:0:0 -t2:BF00 $DISK

Note: If you're using a Libvirt \ QEMU virtual machine, the above commands may fail with an obscure error. If this is the case, try using gdisk in interactive mode to manually create the partition scheme. Also, don't forget to label the partitions correctly: EFI boot partition = ef00, Linux install partition = bf00.

➜ gdisk /dev/vda

Now let's verify that the partitions are good before moving on:

➜ lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
mmcblk0      179:0    0  29.1G  0 disk
nvme0n1      259:0    0 465.8G  0 disk
├─nvme0n1p1  259:1    0   256M  0 part
└─nvme0n1p2  259:2    0 465.5G  0 part

As mentioned above, in this section we're going to create the EFI partition on a separate storage device or an SD card for additional security. The mmcblk0 used here is a SanDisk Extreme Pro (Spec: A1, U3, V30, 32 GB) Micro SD card fitted into the laptop's media card slot reader... but you don't need to use anything fancy (it can even be a USB flash drive). Just be sure that the storage device or media used is something reliable (it is housing our EFI files after all). It's also a good idea to create a backup image of this entire storage device in case of loss, theft, corruption, or device failure. Check the Post installation Tips section later for further guidance.

➜ DISK0=/dev/mmcblk0
➜ DISK1=/dev/nvme0n1
➜ sgdisk --zap-all $DISK0
➜ sgdisk --zap-all $DISK1
➜ sgdisk -n1:1M:+256M -t1:EF00 $DISK0
➜ sgdisk -n1:0:0 -t1:BF00 $DISK1

➜ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
mmcblk0     179:0    0  29.7G  0 disk
└─mmcblk0p1 179:1    0   512M  0 part
nvme0n1     259:0    0 238.5G  0 disk
└─nvme0n1p1 259:1    0 238.5G  0 part

Creating required filesystems and mountpoints

Create the EFI (boot) filesystem on the first partition:

➜ mkfs.vfat -v -F 32 -n EFI /dev/nvme0n1p1

Or if you're going the separate storage \ removable device route:

➜ mkfs.vfat -v -F 32 -n EFI /dev/mmcblk0p1

Create our zroot pool. Skip to the next block of commands if you're going with encryption.
Note: The lowercase and capital o's matter in this command. If you're not using an actual SATA SSD you can leave the autotrim option off (I'm using an M.2\NVMe drive, so I think it should be left off), and if you're using a virtual environment you can probably omit the compression option as well.

➜ zpool create -f \
-o ashift=12 \
-o autotrim=off \
-O devices=off \
-O relatime=on \
-O xattr=sa \
-O acltype=posixacl \
-O dnodesize=legacy \
-O normalization=formD \
-O compression=lz4 \
-O canmount=off \
-O mountpoint=none \
-R /mnt zroot /dev/nvme0n1p2

At this point you may want to encrypt the rootfs and let ZFS prompt you for a passphrase (or use a keyfile) to unlock zroot, especially if you decided to use a separate boot device. Reference the "A quick-start guide to OpenZFS native encryption" link in the Sources section for further guidance.

➜ zpool create -f \
-o ashift=12 \
-o autotrim=off \
-O devices=off \
-O relatime=on \
-O xattr=sa \
-O acltype=posixacl \
-O dnodesize=legacy \
-O normalization=formD \
-O compression=lz4 \
-O canmount=off \
-O mountpoint=none \
-o encryption=aes-256-gcm \
-o keyformat=passphrase \
-o keylocation=prompt \
-R /mnt zroot /dev/nvme0n1p2

Again, if you're going the separate storage \ removable device route, replace that last line with:

-R /mnt zroot /dev/nvme0n1p1

And if you have multiple unused M.2 devices, the following command will configure a RAID0 (striped) setup, assuming you've partitioned these devices the same way as described in the previous step above:

-R /mnt zroot /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1

Note: To learn how to setup other RAID levels, check out the "Programster's Blog" in the Sources & Software section at the very bottom of this guide.
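
As a quick example, a two-device mirror (assuming both devices are partitioned the same way) would instead end with:

-R /mnt zroot mirror /dev/nvme0n1p1 /dev/nvme1n1p1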

Moving on, let's verify the pool:

➜ zpool status
  pool: zroot
 state: ONLINE
config:

        NAME                                                          STATE     READ WRITE CKSUM
        zroot                                                         ONLINE       0     0     0
          nvme-SSDPEKKF256G8_NVMe_INTEL_256GB_PHHH924502N1256B-part1  ONLINE       0     0     0

errors: No known data errors

➜ zfs get all zroot
NAME   PROPERTY              VALUE                  SOURCE
zroot  type                  filesystem             -
zroot  creation              Tue Sep 20  0:53 2022  -
zroot  used                  172G                   -
zroot  available             58.7G                  -
zroot  referenced            96K                    -
zroot  compressratio         1.47x                  -
zroot  mounted               no                     -
zroot  quota                 none                   default
zroot  reservation           none                   default
zroot  recordsize            128K                   default
zroot  mountpoint            none                   local
zroot  sharenfs              off                    default
zroot  checksum              on                     default
zroot  compression           zstd                   local
zroot  atime                 on                     default
zroot  devices               on                     default
zroot  exec                  on                     default
zroot  setuid                on                     default
zroot  readonly              off                    default
zroot  zoned                 off                    default
zroot  snapdir               hidden                 default
zroot  aclmode               discard                default
zroot  aclinherit            restricted             default
zroot  createtxg             1                      -
zroot  canmount              on                     default
zroot  xattr                 sa                     local
zroot  copies                1                      default
zroot  version               5                      -
zroot  utf8only              off                    -
zroot  normalization         none                   -
zroot  casesensitivity       sensitive              -
zroot  vscan                 off                    default
zroot  nbmand                off                    default
zroot  sharesmb              off                    default
zroot  refquota              none                   default
zroot  refreservation        none                   default
zroot  guid                  1141454373434174930    -
zroot  primarycache          all                    default
zroot  secondarycache        all                    default
zroot  usedbysnapshots       0B                     -
zroot  usedbydataset         96K                    -
zroot  usedbychildren        172G                   -
zroot  usedbyrefreservation  0B                     -
zroot  logbias               latency                default
zroot  objsetid              54                     -
zroot  dedup                 off                    default
zroot  mlslabel              none                   default
zroot  sync                  standard               default
zroot  dnodesize             legacy                 default
zroot  refcompressratio      1.00x                  -
zroot  written               96K                    -
zroot  logicalused           249G                   -
zroot  logicalreferenced     42K                    -
zroot  volmode               default                default
zroot  filesystem_limit      none                   default
zroot  snapshot_limit        none                   default
zroot  filesystem_count      none                   default
zroot  snapshot_count        none                   default
zroot  snapdev               hidden                 default
zroot  acltype               posix                  local
zroot  context               none                   default
zroot  fscontext             none                   default
zroot  defcontext            none                   default
zroot  rootcontext           none                   default
zroot  relatime              on                     local
zroot  redundant_metadata    all                    default
zroot  overlay               on                     default
zroot  encryption            off                    default
zroot  keylocation           none                   default
zroot  keyformat             none                   default
zroot  pbkdf2iters           0                      default
zroot  special_small_blocks  0                      default

Additionally, if you turned encryption on with a passphrase:

zroot  encryption            aes-256-gcm            -
zroot  keylocation           file:///...            local
zroot  keyformat             raw                    -
zroot  encryptionroot        zroot                  -
zroot  keystatus             available              -

Create filesystem mountpoints and then import \ export test:

➜ zfs create zroot/ROOT
➜ zfs create -o canmount=noauto -o mountpoint=/ zroot/ROOT/arch
➜ zfs create -o mountpoint=/home zroot/home
➜ zpool export zroot
➜ zpool import -d /dev/nvme0n1p2 -R /mnt zroot -N
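
As a quick sanity check after the re-import, list the datasets with their mountpoints; zroot/ROOT/arch should show canmount=noauto:

➜ zfs list -o name,mountpoint,canmount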

Tip: If you're planning to run Docker, Libvirt or something else that could get write-intensive, it might be prudent to create additional datasets. Note that the nested datasets need their parent datasets created first:

➜ zfs create -o canmount=off -o mountpoint=none zroot/var
➜ zfs create -o canmount=off -o mountpoint=none zroot/var/lib
➜ zfs create -o mountpoint=/var/log zroot/var/log
➜ zfs create -o mountpoint=/var/lib/docker zroot/var/lib/docker
➜ zfs create -o mountpoint=/var/lib/libvirt zroot/var/lib/libvirt

Mounting everything together

Mount our mount points:

➜ zfs mount zroot/ROOT/arch
➜ zfs mount -a
➜ mkdir -p /mnt/{etc/zfs,boot/efi}
➜ mount /dev/nvme0n1p1 /mnt/boot/efi

Or if you're going the other setup, replace the last command above with this one here:

➜ mount /dev/mmcblk0p1 /mnt/boot/efi

Check if zfs mounted successfully:

➜ mount | grep mnt
zroot/ROOT/arch on /mnt type zfs (rw,relatime,xattr,posixacl)
zroot/home on /mnt/home type zfs (rw,relatime,xattr,posixacl)

If you went with the other setup, it should additionally show the SD card mounted:

zroot/ROOT/arch on /mnt type zfs (rw,relatime,xattr,posixacl)
zroot/home on /mnt/home type zfs (rw,relatime,xattr,posixacl)
/dev/mmcblk0p1 on /mnt/boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)

Make sure df shows all the mount points (including any additional ones you may have created earlier... log, docker, libvirt etc.)

➜ df -k
zroot/ROOT/arch... /mnt
zroot/home...      /mnt/home
/dev/nvme0n1p1...  /mnt/boot/efi

Or if booting from a removable device (SD card)...

zroot/ROOT/arch... /mnt
zroot/home...      /mnt/home
/dev/mmcblk0p1...  /mnt/boot/efi

Configure the ZFS pool

If all is good, move on to set bootfs and create zfs cache file:

➜ zpool set bootfs=zroot/ROOT/arch zroot
➜ zpool set cachefile=/etc/zfs/zpool.cache zroot
➜ cp -v /etc/zfs/zpool.cache /mnt/etc/zfs

Install the essential packages for the base Arch Linux system

Install packages with pacstrap:

➜ pacman -Syy
➜ pacstrap /mnt base base-devel linux linux-headers linux-firmware intel-ucode amd-ucode fwupd udisks2 sbctl efitools dkms efibootmgr man-db man-pages git rust cargo nano

Generate and configure the filesystem table file.
You need to comment out all the entries (by adding # to the beginning of the line) except the one entry containing /boot/efi.

➜ genfstab -U -p /mnt >> /mnt/etc/fstab
➜ nano /mnt/etc/fstab
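
As a rough sketch of what the edited file could end up looking like (the UUID is a placeholder, yours will differ, and the exact options may vary), only the EFI entry stays active:

#zroot/ROOT/arch   /           zfs    rw,xattr,posixacl    0 0
#zroot/home        /home       zfs    rw,xattr,posixacl    0 0
UUID=XXXX-XXXX     /boot/efi   vfat   rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro    0 2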

Copy over the dns settings to the new system:

➜ cp -v /etc/resolv.conf /mnt/etc

Chroot into the new system and perform optional tweaks before compilation.

Speaking as a long-time Arch user, editing the compile flags is a good idea, as you'll be compiling some of the best software available from the AUR.

➜ arch-chroot /mnt
➜ nano /etc/makepkg.conf

In /etc/makepkg.conf on the "CFLAGS" line, remove "-march" and "-mtune" and replace them with "-march=native". Scroll down to the line with MAKEFLAGS="-j2" and change that to MAKEFLAGS="-j$(nproc)". Near the bottom of the file look for the compression options, and add "--threads=0" to the COMPRESSZST and COMPRESSXZ commands.
Consult the Arch Linux wiki for additional guidance.

➜ nano /etc/makepkg.conf
...
CFLAGS="-march=native -O2 -pipe -fno-plt"
...
RUSTFLAGS="-Copt-level=2 -Ctarget-cpu=native -Cforce-frame-pointers=yes"
...
MAKEFLAGS="-j$(nproc)"
...
COMPRESSZST=(zstd -c -z -q --threads=0 -)
COMPRESSXZ=(xz -c -z --threads=0 -)
...

Create a regular user account and give it sudo permissions

➜ useradd -m username
➜ passwd username
➜ usermod -aG users,sys,adm,log,scanner,power,rfkill,video,storage,optical,lp,audio,wheel username
➜ id username

Add "%wheel ALL=(ALL) ALL" without quotes using nano:

➜ nano /etc/sudoers.d/username
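
The file only needs the single line below; afterwards you can sanity-check the syntax with visudo (substituting whatever filename you used):

%wheel ALL=(ALL) ALL

➜ visudo -cf /etc/sudoers.d/username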

Su into your new user account and build \ install Paru (or any favorite pacman wrapper).

For pacman wrappers I used Yay for a while, but after trying Paru I just never went back. I encourage you to use any wrapper you're comfortable or familiar with. In this case we'll go with Paru. If you've never used Paru before, check out this cool cheat sheet here:

➜ su username
➜ sudo pacman -Syy
➜ git clone https://aur.archlinux.org/paru.git && cd paru
➜ makepkg -si

Now use Paru to build and install the ZFS kernel module

➜ cd
➜ paru -S zfs-dkms

Install some additional base packages

These are about the bare minimum packages needed. You can even omit openssh (if you don't plan on doing any remote management) and terminus-font. Note: You can use dhcpcd in place of networkmanager as a lighter-weight alternative.

➜ paru -S networkmanager reflector openssh terminus-font

In addition, for a more complete desktop experience I recommend installing the following packages.
Note: I placed a star next to packages that will require some manual intervention to get working. Look up the package in the Arch Linux Wiki for guidance; a short example of the typical service enables follows the list.

xdg-user-dirs xdg-utils
bash-completion tmux
inetutils net-tools dnsutils avahi* nss-mdns ntp*
firewalld* apparmor*
tlp* acpi_call acpid*
bluez* bluez-utils
gpm*
cups*
alsa-utils pipewire pipewire-alsa pipewire-pulse pipewire-jack sof-firmware
smartmontools* lm_sensors*
curl wget lftp rsync
dmidecode lsof htop
fcron
zsh* grml-zsh-config fd fzf*
exfatprogs ntfs-3g dosfstools
neovim
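
As a rough example of that manual intervention (assuming you installed the matching packages from the list above), most of the starred items at least need their service or socket enabled:

➜ sudo systemctl enable avahi-daemon acpid bluetooth firewalld apparmor tlp cups.socket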

Enable base services

➜ systemctl enable zfs-import-cache
➜ systemctl enable zfs-import.target
➜ systemctl enable zfs-mount
➜ systemctl enable zfs-share
➜ systemctl enable zfs-zed
➜ systemctl enable zfs.target
➜ systemctl enable NetworkManager
➜ systemctl enable reflector.timer

Generate your hostid file and write it down

➜ sudo zgenhostid $(hostid)
➜ hostid

Set time zone and generate locales

➜ sudo ln -sf /usr/share/zoneinfo/Pacific/Guam /etc/localtime
➜ hwclock --systohc

Generate your locales, edit /etc/locale.gen and uncomment all locales you need, for example en_US.UTF-8.

➜ nano /etc/locale.gen
➜ locale-gen

Configure the network

Set your hostname by writing it to /etc/hostname:

➜ echo "hostname" > /etc/hostname

Then edit /etc/hosts, where "hostname" is your previously chosen hostname:

➜ echo "127.0.0.1 localhost" >> /etc/hosts
➜ echo "::1       localhost" >> /etc/hosts
➜ echo "127.0.0.1 hostname.localdomain hostname" >> /etc/hosts

Configure reflector

Edit the reflector configuration file, adding the correct country and making sure you have "--sort rate".

➜ nano /etc/xdg/reflector/reflector.conf
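
For reference, a minimal sketch of what that file could contain (swap in your own country) looks something like this:

--save /etc/pacman.d/mirrorlist
--country Japan
--protocol https
--latest 5
--sort rate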

Set default console font

This will make your console look cleaner... However, I urge you to customize these settings to your liking.

➜ sudo nano /etc/vconsole.conf
FONT=ter-128n

Install the EFI bootloader: ZFSBootMenu

Note: After performing the commands below to download the zfsbootmenu EFI file, you may have to go into your BIOS (Boot) settings and specifically select this file to boot. Basically telling your BIOS, "Hey this is my boot loader. Use this file to help load the OS!".

➜ mkdir -p /boot/efi/EFI/zbm
➜ wget https://get.zfsbootmenu.org/latest.EFI -O /boot/efi/EFI/zbm/zfsbootmenu.EFI
➜ efibootmgr --disk /dev/nvme0n1 --part 1 --create --label "ZFSBootMenu" --loader '\EFI\zbm\zfsbootmenu.EFI' --unicode "spl_hostid=$(hostid) zbm.timeout=3 zbm.prefer=zroot zbm.import_policy=hostid" --verbose

If you're going the external storage device route, use this command instead:

➜ efibootmgr --disk /dev/mmcblk0 --part 1 --create --label "ZFSBootMenu" --loader '\EFI\zbm\zfsbootmenu.EFI' --unicode "spl_hostid=$(hostid) zbm.timeout=3 zbm.prefer=zroot zbm.import_policy=hostid" --verbose
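
Either way, running efibootmgr with no arguments afterwards should list the new "ZFSBootMenu" entry so you can confirm it registered:

➜ efibootmgr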

Set kernel boot parameters:

➜ zfs set org.zfsbootmenu:commandline="noresume init_on_alloc=0 rw spl.spl_hostid=$(hostid)" zroot/ROOT
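
You can read the property back to confirm it was set; child datasets like zroot/ROOT/arch will inherit it:

➜ zfs get org.zfsbootmenu:commandline zroot/ROOT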

Edit the mkinitcpio.conf file

Make sure the HOOKS line looks like the following, and edit COMPRESSION_OPTIONS near the bottom of the file:
HOOKS=(base udev autodetect modconf block keyboard zfs filesystems)
COMPRESSION_OPTIONS=(-c -z -q --threads=0 -)

➜ nano /etc/mkinitcpio.conf
➜ mkinitcpio -P

Note: There are a lot of settings you can change in this file that can help further improve functionality, performance, and better tailor Arch Linux to your hardware. As usual, consult the Arch Linux wiki for guidance.

Assign root password:

➜ passwd

And finally exit the chroot environment by pressing Ctrl + d, or:

➜ exit

Unmount, reboot, pray, and test

➜ umount /mnt/boot/efi
➜ zfs umount -a
➜ zpool export zroot
➜ reboot

Setup Secure Boot & TPM

This part is completely optional, but I believe taking the opportunity now to update your BIOS version could save you some time later... And you don't have to use this method if you prefer another way.

➜ sudo fwupdmgr get-devices
Dell Inc. Latitude 7400
│
├─Cannon Point-LP LPC Controller:
│     Device ID:          71b31258b13a4b2793e529856a190f8fb02ad151
│     Current version:    30
│     Vendor:             Intel Corporation (PCI:0x8086)
│     GUIDs:              e9af651b-e3d5-55ec-b0f2-77c927119317 ← PCI\VEN_8086&DEV_9D84
│                         2b36d90a-fb29-585b-807c-ad4836cb3256 ← PCI\VEN_8086&DEV_9D84&REV_30
│                         a876086d-1455-512b-9346-8d2bb23bd445 ← PCI\VEN_8086&DEV_9D84&SUBSYS_102808E1
│                         88e1319f-1116-5e38-8354-9f64be2ea73d ← PCI\VEN_8086&DEV_9D84&SUBSYS_102808E1&REV_30
│                         2d80f689-0b5e-5c4b-b6df-bd767f6e9f05 ← INTEL_SPI_CHIPSET\ID_PCH300
│     Device Flags:       • Internal device
│                         • Cryptographic hash verification is available
...
➜ sudo fwupdmgr refresh
Updating lvfs
Downloading…             [************************************** ]
Successfully downloaded new metadata: 1 local device supported
➜ sudo fwupdmgr get-updates
Devices with no available firmware updates:
 • SSDPEKKF256G8 NVMe INTEL 256GB
 • TPM
 • Thunderbolt host controller
 • UEFI dbx
 ...
➜ sudo fwupdmgr update
Devices with no available firmware updates:
 • SSDPEKKF256G8 NVMe INTEL 256GB
 • TPM
 • Thunderbolt host controller
 • UEFI dbx
╔══════════════════════════════════════════════════════════════════════════════╗
║ Upgrade System Firmware from 1.26.0 to 1.41.1?                               ║
╠══════════════════════════════════════════════════════════════════════════════╣
║ This stable release fixes the following issues:                              ║
║                                                                              ║
║ • This release contains security updates as disclosed in the Dell            ║
║ Security Advisory.                                                           ║
║                                                                              ║
║ Latitude 7400 must remain plugged into a power source for the duration of    ║
║ the update to avoid damage.                                                  ║
╚══════════════════════════════════════════════════════════════════════════════╝
Perform operation? [Y|n]:Y
Waiting…                 [***************************************] Less than one minute remaining…
Successfully installed firmware
Do not turn off your computer or remove the AC adapter while the update is in progress.
An update requires a reboot to complete. Restart now? [y|N]: Y

Note: Getting fwupdmgr to update your BIOS version seamlessly, without first needing to disable Secure Boot, requires some additional setup and configuration which I'm still investigating. I believe you'll need to install shim-signed from the AUR. Once I can confirm I'm able to do it successfully myself I'll update this guide with the instructions. Please let me know if you've already got this setup fully working.

Moving on... check the TPM requirements and the current status of Secure Boot

➜ bootctl
System:
      Firmware: UEFI 2.80 (American Megatrends 5.26)
 Firmware Arch: x64
   Secure Boot: disabled (audit)
  TPM2 Support: yes
  Measured UKI: yes
  Boot into FW: supported
  ...
 
➜ sbctl status
Installed:      ✓ sbctl is installed
Owner GUID:     bd4a58ba-6a8d-4ee3-866e-1e9f1eb7e690
Setup Mode:     ✗ Enabled
Secure Boot:    ✗ Disabled
Vendor Keys:    none

Running the commands above, you want to verify you have TPM2 support and that "Secure Boot:" shows "disabled (audit)". If it's showing something different than the output displayed here, you may need to reboot into your computer's BIOS settings and verify that the TPM chip is enabled and Secure Boot is set to "Audit" or "Setup" mode. If you don't have those options, you should see another option to delete all keys. On my Dell Intel laptop, I also needed to enable "Trusted Execution", "Intel Virtualization Technology", and "VT for Direct I/O" to get everything working.

Once you have that configured correctly and you have the same command output (or something similar) as shown above we can move on...

➜ sudo sbctl create-keys
Created Owner UUID a9fbbdb7-a05f-48d5-b63a-08c5df45ee70
Creating secure boot keys...✔
Secure boot keys created!
➜ sudo sbctl enroll-keys
Enrolling keys to EFI variables...✔
Enrolled keys to the EFI variables!

Or, if you had to clear out your keys in the BIOS settings:

➜ sudo sbctl enroll-keys --microsoft
Enrolling keys to EFI variables...✔
Enrolled keys to the EFI variables!
➜ sudo sbctl verify
Verifying file database and EFI images in /boot/efi...
✗ /boot/vmlinuz-linux is not signed
✗ /boot/vmlinuz-linux-vfio is not signed
✗ /boot/efi/EFI/zbm/zfsbootmenu.EFI is not signed
✗ /boot/efi/EFI/arch/fwupdx64.efi is not signed

Sign all the EFI files and any additional kernels you might have installed

➜ sudo sbctl sign -s /boot/efi/EFI/zbm/zfsbootmenu.EFI
➜ sudo sbctl sign -s /boot/efi/EFI/arch/fwupdx64.efi
➜ sudo sbctl sign -s /boot/vmlinuz-linux
➜ sudo sbctl sign -s /boot/vmlinuz-linux-vfio
➜ sudo sbctl verify
Verifying file database and EFI images in /boot/efi...
✓ /boot/vmlinuz-linux is signed
✓ /boot/vmlinuz-linux-vfio is signed
✓ /boot/efi/EFI/zbm/zfsbootmenu.EFI is signed
✓ /boot/efi/EFI/arch/fwupdx64.efi is signed
➜ sudo sbctl list-files
/boot/efi/EFI/arch/fwupdx64.efi
Signed:         ✓ Signed

/boot/efi/EFI/zbm/zfsbootmenu.EFI
Signed:         ✓ Signed

/boot/vmlinuz-linux
Signed:         ✓ Signed

/boot/vmlinuz-linux-vfio
Signed:         ✓ Signed

Reboot and turn Secure Boot on, or switch it from Audit \ Setup to "Deploy" mode (if available), in the BIOS. If your system boots up successfully you should be good to go. You can run this command again to verify Secure Boot is fully enabled.

➜ sbctl status
Installed:      ✓ sbctl is installed
Owner GUID:     bd4a58ba-6a8d-4ee3-866e-1e9f1eb7e690
Setup Mode:     ✓ Disabled
Secure Boot:    ✓ Enabled
Vendor Keys:    none

Whenever you update the kernel or install a new kernel, a hook should automatically sign the boot images & EFI files.

If you're having issues getting Secure Boot fully working, then scroll down to the Sources & Software section and take a look at the "Setting up Arch + LUKS + BTRFS + systemd-boot + apparmor + Secure Boot + TPM 2.0" link for further troubleshooting.

Post installation Tips

If you made it this far into this guide and you're able to boot into a fully operational ZFS root system, then I commend you - that is a testament to your focus and tenacity. If the system did not boot up correctly (I still commend you!), don't despair - just retrace your steps carefully and look into the troubleshooting section at the bottom of this guide for any clues.

Setting up a Swap solution using zram
I took these instructions straight from the Arch Linux wiki (https://wiki.archlinux.org/title/Zram). We'll be going with the udev rule to keep everything straightforward and persistent.

Verify you're able to load the zram module with modprobe:

➜ sudo modprobe zram
➜ lsmod | grep zram
zram                   61440  0
842_decompress         16384  1 zram
842_compress           24576  1 zram
lz4hc_compress         20480  1 zram
lz4_compress           24576  1 zram

Load the zram module at boot:

➜ sudo nano /etc/modules-load.d/zram.conf
zram

Create a udev rule and decide how much RAM zram can use:

➜ sudo nano /etc/udev/rules.d/99-zram.rules
ACTION=="add", KERNEL=="zram0", ATTR{initstate}=="0", ATTR{comp_algorithm}="zstd", ATTR{disksize}="2G", TAG+="systemd"

Add /dev/zram0 to your fstab with a higher than default priority and the x-systemd.makefs option:

➜ sudo nano /etc/fstab
/dev/zram0 none swap defaults,discard,pri=100,x-systemd.makefs 0 0

Optimize the swap-on-zram settings:

➜ sudo nano /etc/sysctl.d/99-vm-zram-parameters.conf
vm.swappiness = 180
vm.watermark_boost_factor = 0
vm.watermark_scale_factor = 125
vm.page-cluster = 0
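
If you'd rather apply the sysctl values right away instead of waiting for the reboot, this reloads everything under /etc/sysctl.d:

➜ sudo sysctl --system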

Reboot the system and verify everything works:

➜ lsmod | grep zram
zram                   61440  0
842_decompress         16384  1 zram
842_compress           24576  1 zram
lz4hc_compress         20480  1 zram
lz4_compress           24576  1 zram

➜ swapon
NAME       TYPE      SIZE USED PRIO
/dev/zram0 partition   2G   0B  100

➜ zramctl
NAME       ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 zstd            2G   4K   59B    4K       8 [SWAP]

Create a backup of the external boot device
As mentioned earlier in this guide, it's a good idea to back up your SD card or any removable storage device housing our EFI files.

Identify the boot mount point and temporarily unmount it:

➜ mount | grep boot
/dev/mmcblk0p1 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
➜ sudo umount /boot/efi

Warning: While /boot/efi is unmounted, do not attempt any software updates as it could break the OS!
Use dd and bzip2 to image the data into a compressed file in one go:

➜ sudo dd if=/dev/mmcblk0 | bzip2 -c > ~/sdcard-backup-10-13-25.img.bz2
62333952+0 records in
62333952+0 records out
31914983424 bytes (32 GB, 30 GiB) copied, 461.963 s, 69.1 MB/s
sudo dd if=/dev/mmcblk0  6.71s user 38.70s system 9% cpu 7:47.53 total
bzip2 -c > ~/sdcard-backup-10-13-25.img.bz2  267.58s user 4.18s system 58% cpu 7:47.57 total

Check the compressed file for any issues:

➜ file ~/sdcard-backup-10-13-25.img.bz2
sdcard-backup-10-13-25.img.bz2: bzip2 compressed data, block size = 900k
➜ bzip2 -t ~/sdcard-backup-10-13-25.img.bz2
bzip2 -t sdcard-backup-10-13-25.img.bz2  71.23s user 0.05s system 99% cpu 1:11.28 total

Operation complete, let's mount our device back like it was before:

➜ sudo mount /dev/mmcblk0p1 /boot/efi
➜ mount | grep boot
/dev/mmcblk0p1 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)

Use this command to restore the backup file onto the same (or a new) storage device:

➜ sudo bzip2 -dc ~/sdcard-backup-10-13-25.img.bz2 | dd of=/dev/mmcblk0

Auto snapshots with every Pacman transaction
One of the main features of the ZFS filesystem is its ability to take filesystem snapshots. This feature is invaluable, especially during major operating system upgrades and software updates. ZFSBootMenu conveniently takes this to another level by showing every snapshot taken and giving you an option to easily choose which snapshot to boot from at boot time.
However, initiating a manual ZFS snapshot every time something changes on the system can be very tedious. Using a Pacman hook is a very effective way to deal with this situation.

Install the hook:

➜ paru -S pacman-zfs-hook

The hook calls the "zfs-snap-pac" script to create a snapshot whenever there is a pacman transaction. You can customize this script to your liking. For example, we'll modify the date command within the GetSnapshotName() function (replacing "date +%s" with "date +%F-%R") so that it names snapshots in a more meaningful manner. After installing the script from the AUR, edit it like so:

➜ sudo nano /usr/share/libalpm/scripts/zfs-snap-pac
...
GetSnapshotName() {
    local time=$(date +%F-%R)
    snapshotName="$snapshotName$time"
}
...

Example output of default snapshot naming scheme:

➜ date +%s
1708100748

New human friendly naming scheme:

➜ date +%F-%R
2024-02-17-02:25

Auto prune old snapshots
Creating snapshots every time there's a Pacman transaction can quickly take up valuable space if old snapshots are never deleted. Use another hook to automate this task and avoid potential storage issues.

Install the hook:

➜ paru -S zfs-prune-snapshots

Use the hook file below to automatically call the zfs-prune-snapshots script after every Pacman transaction has completed. The argument "2w" instructs the script to delete any ZFS snapshot that is 2 weeks old or older.

➜ sudo nano /usr/share/libalpm/hooks/01-zfs-prune-pac.hook
[Trigger]
Operation = Upgrade
Operation = Remove
Type = Package
Target = *

[Action]
Description = Pruning ZFS snapshots...
When = PostTransaction
Exec = /usr/bin/zfs-prune-snapshots 2w
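
Before the hook ever fires you can also invoke the script by hand; assuming the AUR version of zfs-prune-snapshots, which supports a -n dry-run flag, this previews what would be deleted without touching anything:

➜ sudo zfs-prune-snapshots -n 2w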

For recovery situations with a broken pacman and outdated glibc
This is an issue I've seen when you don't update Arch Linux for a long time (months, years, etc.). To fix this kind of issue it's important you perform this step now (before pacman breaks and you're out of luck).

Install pacman-static from the AUR (its dependencies are compiled into one static binary):

➜ paru -S pacman-static

Create a Netboot Arch Linux \ Online Recovery System
The idea behind this recovery system is to have access to an "always working" and "up to date" Arch Linux system, from which we can easily perform recovery efforts anytime. This recovery system needs to be versatile and not configuration dependent, with the only requirement being a wired internet connection.

First you'll need to download a copy of the Arch Linux ipxe x86_64 UEFI executable:

➜ wget https://archlinux.org/static/netboot/ipxe-arch.efi

Sign and copy it to the boot \ efi partition:

➜ sudo mkdir /boot/efi/EFI/netboot
➜ sudo cp -v ./ipxe-arch.efi /boot/efi/EFI/netboot
'./ipxe-arch.efi' -> '/boot/efi/EFI/netboot/ipxe-arch.efi'
➜ sudo sbctl sign -s /boot/efi/EFI/netboot/ipxe-arch.efi
✓ Signed /boot/efi/EFI/netboot/ipxe-arch.efi

Now you should be able to netboot (aka PXE boot) the Arch Linux installer from your EFI shell. If your BIOS supports it, look in the boot settings and add a boot entry containing the path to ipxe-arch.efi. That way, when you boot your computer, press the appropriate F key to access your BIOS's boot menu and take it from there.

Launching from your EFI shell:

# Enter your EFI partition FS0 or FS1
FS1:
cd EFI\netboot
# Start the efi file
ipxe-arch.efi

Note: Before Netbooting Arch Linux, be sure to temporarily disable Secure Boot or set to "Audit" mode in your BIOS settings. Not doing so will cause the following error:

Could not select: Exec Format error (http://ipxe.org/2e008081)

Once you've booted into the Arch Linux Netboot Menu, select "Release:" and choose a release date that's about a month or two old. Choosing a date that is too recent will not work because the zfs modules haven't been built yet for that kernel version.

After that, select "Choose a mirror" and find the closest Country \ Server to your location.

Finally, select "Boot Arch Linux" to let it fetch the Linux image (with the release date you selected) and boot.

Once Arch Linux is booted up and you're in the shell, enter the following command to start building the zfs kernel modules:

➜ curl -s https://raw.githubusercontent.com/eoli3n/archiso-zfs/master/init | bash

The script will start building the modules and may spit out the following error, in which case you can safely ignore it.

>Install zfs-dkms
error: command failed to execute correctly

However, if you get this error, you will unfortunately have to reboot and select an older release date.

>Install zfs-dkms
modprobe: FATAL: Module zfs not found in directory /lib/modules/...

Run the following command to verify the zfs modules are loaded:

➜ lsmod | grep zfs
zfs                  6602751  0
spl                   159744  1 zfs

If the above command checks out, continue down to the System Rescue \ Troubleshooting section to proceed.

Possible issues, workarounds, fixes

To fix the "WARNING: Possible missing firmware" messages every time the initramfs is regenerated:

➜ paru -S mkinitcpio-firmware

how to fix missing libcrypto.so.1.1?

/var stays busy at shutdown due to journald #867

Arch Linux, Aur error - FAILED unknown public key

zfs-dkms depends on a specific version of the zfs-utils, and zfs-utils depend on a specific version of zfs-dkms, which completely prevents me from updating them

Use this fix script by ghost:

#!/bin/zsh

# Refresh the package databases
paru -Sy

# Extract the installed (-Qi) and repo (-Si) versions of zfs-dkms and zfs-utils
g='/Version/{print $3}'
d1=$(paru -Qi zfs-dkms | gawk "$g")
d2=$(paru -Si zfs-dkms | gawk "$g")
u1=$(paru -Qi zfs-utils | gawk "$g")
u2=$(paru -Si zfs-utils | gawk "$g")

# Nothing to do if the installed versions already match the repo versions
if [[ $d1 == $d2 || $u1 == $u2 ]]; then
	echo "zfs is up to date"
	exit 0
fi

# Update both packages while telling paru to ignore their strict
# version-locked dependency on each other
paru -Sy zfs-dkms zfs-utils \
 --assume-installed zfs-dkms=$d1 --assume-installed zfs-dkms=$d2 \
 --assume-installed zfs-utils=$u1 --assume-installed zfs-utils=$u2

System Rescue \ Troubleshooting

Mounting ZFS using live boot media
If the system is broken and you need to perform troubleshooting tasks to recover, it's not difficult to mount zroot using live media (especially if you've followed the steps above and created an Arch Netboot \ Live Recovery Environment). Follow the commands below and substitute your ZFS pool names \ partitions where needed.

➜ lsblk
➜ zpool import -f -N -R /mnt zroot
➜ zfs list
➜ zfs mount zroot/ROOT/arch_mpv-libplacebo2_NEW
➜ ls /mnt
➜ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sr0          11:0    1 883.3M  0 rom
mmcblk0     179:0    0  29.7G  0 disk
└─mmcblk0p1 179:1    0   512M  0 part /boot/efi
zram0       253:0    0     2G  0 disk [SWAP]
nvme0n1     259:0    0 238.5G  0 disk
└─nvme0n1p1 259:1    0 238.5G  0 part

➜ mount /dev/mmcblk0p1 /mnt/boot/efi
➜ zfs mount zroot/home
➜ mount | grep mnt
➜ arch-chroot /mnt

If you're mounting using a different distro, e.g. Alpine Linux:

➜ zpool import -f zroot
➜ mount -t zfs zroot/ROOT/alpine /mnt
➜ mount -t zfs zroot/home /mnt/home
➜ mount /dev/vda1 /mnt/boot/efi
➜ mount | grep mnt
➜ chroot /mnt /usr/bin/env sh

Unmount everything and export the zfs pool

➜ zfs umount -a
➜ zpool export -f zroot

Sources & Software

During the writing of this lengthy setup guide, I spent hours referencing quite a number of documents and online sources. Without these sources and the people who dedicated their time to create and share information, writing this guide would not have been possible. I list all the sources I found valuable not only to give credit, but also in hopes that you may find them useful as well.

2022: Arch Linux Root on ZFS from Scratch Tutorial

Guide: Install Arch Linux on an encrypted zpool with ZFSBootMenu as a bootloader

Debian Bullseye installation with ESP on the zpool disk

Setting up Arch + LUKS + BTRFS + systemd-boot + apparmor + Secure Boot + TPM 2.0 - A long, nightmarish journey, now simplified

sbctl: Key creation and enrollment

Configure systemd ZFS mounts

The Archzfs unofficial user repository offers multiple ways to install the ZFS kernel module.

Arch Linux pacman hooks

Paru: Feature packed AUR helper

The Perfect Recovery System

Netboot

Linux dd Command – 18 Examples with All Options

Similarities and differences between gzip and bzip2

Compress command output by piping to bzip2

A quick-start guide to OpenZFS native encryption

Programster's Blog - ZFS - Create Disk Pools

Thank you, please comment

Feel free to leave any helpful comments or suggestions.

@JkktBkkt

JkktBkkt commented Dec 7, 2024

You could also takeover secure boot by deleting keys (including PK) from within UEFI, creating and enrolling keys via sbctl, adding hooks to it to sign kernel, add hooks to dkms to sign the module using your sbctl-managed db.key.. Oh and use hook zbm-sign.pl from zbm github repo to sign zbm as well, in which case you'd have both secure boot enabled, and this whole setup.
Or, y'know, at least edit the title to reflect that secure boot is not enabled.

@jimboy-701
Author

@JkktBkkt This guide has been updated and now includes Secure Boot - finally!
