@MRi-LE
Last active March 10, 2026 22:17
How to install TrueNAS SCALE 25.10 on a partition instead of the full disk + mirror boot and data partition at a later stage

The TrueNAS installer doesn't offer a way to use anything less than the full device. This wastes capacity when installing to a large NVMe drive, which usually holds several hundred GB or even multiple TB. TrueNAS SCALE only needs a few GB for its system files, so installing to a 16 GB partition is sufficient.

This guide covers TrueNAS SCALE 24.04 through 25.10.

You need to modify the installer script before starting the installation process.

  1. Boot TrueNAS Scale installer from USB/ISO
  2. Select shell in the first menu (instead of installing)
  3. While in the shell, find the installer and open it with vi

For TrueNAS SCALE before 24.10 (https://github.com/truenas/truenas-installer/blob/release/24.04.2.5/usr/sbin/truenas-install#L460):

vi /usr/sbin/truenas-install

For TrueNAS SCALE 24.10+ (https://github.com/truenas/truenas-installer/blob/76a188e9048f26bca1a88f5d46a552742b3db286/truenas_installer/install.py#L81):

vi /usr/lib/python3/dist-packages/truenas_installer/install.py

  4. In vi you can use ":set number" to easily see which line of code you are on
  5. Make the change on line 81 as follows: await run(["sgdisk", "-n3:0:+16GiB", "-t3:BF01", disk.device])
  6. Save, exit vi, and type exit to return from the shell
  7. Select Install/Upgrade from the Console Setup menu (without rebooting first) and install to the NVMe drive
  8. Remove the USB and reboot
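The edit in step 5 can be sketched as a one-line substitution. Assumption (verify against your installer before editing): the stock 24.10+ call creates partition 3 with `-n3:0:0`, i.e. "use the rest of the disk".

```shell
# Sketch of the change to install.py line 81 (24.10+ installer).
# ASSUMPTION: the stock call uses "-n3:0:0"; check line 81 yourself first.
orig='await run(["sgdisk", "-n3:0:0", "-t3:BF01", disk.device])'
patched=$(printf '%s\n' "$orig" | sed 's/-n3:0:0/-n3:0:+16GiB/')
printf '%s\n' "$patched"
# prints: await run(["sgdisk", "-n3:0:+16GiB", "-t3:BF01", disk.device])
```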

Next we re-partition the NVMe. You can perform this next step on the console or enable SSH via the TrueNAS GUI.

  1. Log in to the Linux shell.
  2. Verify the created partitions and their alignment:
truenas_admin@truenas[~]$ sudo fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: CT1000P3PSSD8
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 67X88FDC-3A4E-43FC-BA2F-AD279703FBFF

Device           Start      End  Sectors  Size Type
/dev/nvme0n1p1    4096     6143     2048    1M BIOS boot
/dev/nvme0n1p2    6144  1054719  1048576  512M EFI System
/dev/nvme0n1p3 1054720 34609151 33554432   16G Solaris /usr & Apple ZFS


truenas_admin@truenas[~]$ for p in 1 2 3; do sudo parted /dev/nvme0n1 align-check optimal $p; done
1 aligned
2 aligned
3 aligned
  3. Create a partition allocating the rest of the disk:
truenas_admin@truenas[~]$ sudo parted /dev/nvme0n1
(parted) name 3 boot-pool
(parted) unit KiB
(parted) print
(parted) mkpart ssd-pool 17304576kiB 100%
(parted) print
Model: CT1000P3PSSD8 (nvme)
Disk /dev/nvme0n1: 976762584kiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start        End           Size          File system  Name       Flags
 1      2048kiB      3072kiB       1024kiB                               bios_grub, legacy_boot
 2      3072kiB      527360kiB     524288kiB     fat32                   boot, esp
 3      527360kiB    17304576kiB   16777216kiB   zfs          boot-pool
 4      17304576kiB  976761856kiB  959457280kiB               ssd-pool

truenas_admin@truenas[~]$ for p in 1 2 3 4; do sudo parted /dev/nvme0n1 align-check optimal $p; done
1 aligned
2 aligned
3 aligned
4 aligned
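As a quick sanity check before `mkpart`, the start offset for the new partition must equal the end of partition 3. Two lines of shell arithmetic confirm the 17304576 KiB figure used above:

```shell
# End of p3 = start of p3 (527360 KiB, from the parted print output)
# plus the 16 GiB boot partition expressed in KiB.
p3_start_kib=527360
p3_size_kib=$((16 * 1024 * 1024))     # 16 GiB = 16777216 KiB
echo $((p3_start_kib + p3_size_kib))  # prints 17304576
```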
  4. Set up the zpool for the UI to pick it up:

sudo zpool create ssd-pool /dev/nvme0n1p4
sudo zpool export ssd-pool

NOTE: you can ignore the error message "cannot mount '/ssd-pool': failed to create mountpoint: Read-only file system"

  5. Use the TrueNAS GUI → Storage Dashboard → Import Pool.

(Re-)Creating a mirrored Boot and Data Pool:

IMPORTANT: zpool replace does not recreate or copy the EFI boot partition!

This needs to be done manually!

Step 1 — Completely wipe the new/replacement/formerly used drive (e.g. nvme1n1)

This guarantees no stale metadata. (GPT + ZFS labels)

sudo zpool labelclear -f /dev/nvme1n1
sudo wipefs -a /dev/nvme1n1
sudo sgdisk --zap-all /dev/nvme1n1

Verify:

lsblk -f /dev/nvme1n1

There should be no filesystems and no partitions left.
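These commands are destructive, so it can help to wrap them in a small guard that refuses the live boot disk. A defensive sketch only — it prints the commands rather than executing them (drop the echo once you've verified the target), and it assumes nvme0n1 is the disk currently in use:

```shell
# Print (not execute) the wipe sequence for a target disk, refusing the
# disk that currently hosts the boot pool.
# ASSUMPTION: nvme0n1 is the live boot disk on this system.
wipe_disk() {
  case "$1" in
    */nvme0n1) echo "refusing to wipe the active boot disk" >&2; return 1 ;;
  esac
  for cmd in "zpool labelclear -f" "wipefs -a" "sgdisk --zap-all"; do
    echo "sudo $cmd $1"
  done
}
wipe_disk /dev/nvme1n1
```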

Step 2 — Clone the partition table

We do not use parted here! We use sgdisk, which is what TrueNAS relies on internally.

sudo sgdisk -R=<newDrive> <existingDrive>
sudo sgdisk -G <newDrive>

This:

  • Copies the GPT header
  • Copies all partition offsets
  • Preserves 1 MiB alignment
  • Makes the layouts identical

In addition, regenerate unique disk GUIDs with -G (required).

truenas_admin@truenas[~]$ sudo sgdisk -R=/dev/nvme1n1 /dev/nvme0n1
The operation has completed successfully.
truenas_admin@truenas[~]$ sudo sgdisk -G /dev/nvme1n1
The operation has completed successfully.

truenas_admin@truenas[~]$ lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
nvme1n1
├─nvme1n1p1
├─nvme1n1p2 vfat FAT32 EFI 4E95-56BB
├─nvme1n1p3 zfs_member 5000 boot-pool 5995343355652310543
└─nvme1n1p4 zfs_member 5000 ssd-pool 8868205190042146147
nvme0n1
├─nvme0n1p1
├─nvme0n1p2 vfat FAT32 EFI 6165-92D6
├─nvme0n1p3 zfs_member 5000 boot-pool 3904092570291516174
└─nvme0n1p4 zfs_member 5000 ssd-pool 7548575174596316953
Step 3 — Verify layouts match exactly

sudo sgdisk -p /dev/nvme0n1
sudo sgdisk -p /dev/nvme1n1

The start/end sectors must match for:

  • p1 (BIOS_GRUB)
  • p2 (EFI)
  • p3 (boot-pool slice)
  • p4 (data slice)

Also check:

lsblk -o NAME,START,SIZE /dev/nvme0n1 /dev/nvme1n1

They should now be identical.
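The two listings can also be compared mechanically. A sketch of that check, shown here on captured sample text so it's safe to run anywhere; on the live system feed it the real `sudo sgdisk -p /dev/nvme0n1` and `/dev/nvme1n1` output instead:

```shell
# Compare two sgdisk -p listings, ignoring the per-disk "Disk identifier"
# line that `sgdisk -G` deliberately randomizes.
same_layout() {
  a=$(printf '%s\n' "$1" | grep -v '^Disk identifier')
  b=$(printf '%s\n' "$2" | grep -v '^Disk identifier')
  [ "$a" = "$b" ]
}
# Sample captures (placeholder text, not real sgdisk output):
d0='Disk identifier (GUID): AAAA
Number  Start    End
   1    4096     6143'
d1='Disk identifier (GUID): BBBB
Number  Start    End
   1    4096     6143'
same_layout "$d0" "$d1" && echo "layouts match"
```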

Step 4 — Attach nvme1n1p3 to the boot pool (CLI)

Let's add the new drive to the pool and create a mirror:

sudo zpool attach <pool-name> <existingDrive> <newDrive>

For the boot pool:

truenas_admin@truenas[~]$ sudo zpool attach boot-pool nvme0n1p3 nvme1n1p3
truenas_admin@truenas[~]$ zpool status boot-pool
  pool: boot-pool
 state: ONLINE
  scan: resilvered 3.23G in 00:00:05 with 0 errors on Wed Mar  4 06:01:53 2026
config:

        NAME           STATE     READ WRITE CKSUM
        boot-pool      ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            nvme0n1p3  ONLINE       0     0     0
            nvme1n1p3  ONLINE       0     0     0

Optionally, mirror the data/application pool:

truenas_admin@truenas[~]$ sudo zpool attach ssd-pool nvme0n1p4 nvme1n1p4
truenas_admin@truenas[~]$ zpool status ssd-pool
  pool: ssd-pool
 state: ONLINE
  scan: resilvered 2.21M in 00:00:00 with 0 errors on Thu Mar  5 06:14:42 2026
config:

        NAME           STATE     READ WRITE CKSUM
        ssd-pool       ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            nvme0n1p4  ONLINE       0     0     0
            nvme1n1p4  ONLINE       0     0     0

Recommended settings for an apps pool:

sudo zfs set compression=zstd ssd-pool
sudo zfs set atime=off ssd-pool

Step 5 — (IMPORTANT) Install EFI bootloader on nvme1n1

ZFS mirror alone is not enough — EFI must exist.

Confirm what /boot/efi really is right now

Run:

truenas_admin@truenas[~]$ sudo findmnt /boot/efi || true
truenas_admin@truenas[~]$ ls -ld /boot/efi
drwxr-xr-x 2 root root 2 Mar  3 23:58 /boot/efi
truenas_admin@truenas[~]$ sudo blkid /dev/nvme0n1p2 /dev/nvme1n1p2
/dev/nvme0n1p2: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="6165-92D6" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="24657b7f-2cfa-44dc-842a-498c73e48478"
/dev/nvme1n1p2: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="4E95-56BB" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="6ebe1b30-0ec9-47a6-82a9-a2cd11a2d2ef"
  • If findmnt shows nothing → /boot/efi is not mounted (most likely).
  • blkid should show TYPE="vfat" for the ESP partitions.

Instead of running grub-install, do what appliance systems commonly do:

  • Mount the ESP on nvme0
  • Mount the ESP on nvme1
  • Copy the EFI files
  • Optionally add a BIOS/UEFI boot entry for the new drive

A. Mount both ESPs
truenas_admin@truenas[~]$ sudo mkdir -p /mnt/esp0 /mnt/esp1
truenas_admin@truenas[~]$ sudo mount -t vfat /dev/nvme0n1p2 /mnt/esp0
truenas_admin@truenas[~]$ sudo mount -t vfat /dev/nvme1n1p2 /mnt/esp1

NOTE: If mounting /dev/nvme1n1p2 fails or it's not vfat, reformat it (this erases only the ESP partition on nvme1):

sudo mkfs.vfat -F32 -n EFI /dev/nvme1n1p2
sudo mount -t vfat /dev/nvme1n1p2 /mnt/esp1

B. Copy EFI contents from disk 0 to disk 1

sudo rsync -aH --delete /mnt/esp0/ /mnt/esp1/
sync
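After the copy it's worth confirming that the two trees really are identical with a recursive diff. Demonstrated below on two throwaway directories so it can run anywhere; on the live system the equivalent check is `sudo diff -r /mnt/esp0 /mnt/esp1`:

```shell
# Build a toy ESP tree, copy it, and verify the copy with a recursive diff.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/EFI/BOOT"
echo "loader" > "$src/EFI/BOOT/BOOTX64.EFI"
cp -a "$src/EFI" "$dst/EFI"          # stands in for the rsync above
diff -r "$src" "$dst" && echo "ESPs identical"
rm -rf "$src" "$dst"
```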

C. Make sure the “removable media” fallback exists (helps a lot)

Many firmwares will boot \EFI\BOOT\BOOTX64.EFI automatically if no NVRAM entry exists.

Check what TrueNAS placed on the ESP:

truenas_admin@truenas[~]$ sudo find /mnt/esp0/EFI -maxdepth 2 -type f -iname '*.efi' -print
/mnt/esp0/EFI/boot/bootx64.efi
/mnt/esp0/EFI/debian/fbx64.efi
/mnt/esp0/EFI/debian/grubx64.efi
/mnt/esp0/EFI/debian/mmx64.efi
/mnt/esp0/EFI/debian/shimx64.efi

If you see something like /mnt/esp0/EFI/debian/grubx64.efi (example), create the fallback:

truenas_admin@truenas[~]$ sudo mkdir -p /mnt/esp1/EFI/BOOT
truenas_admin@truenas[~]$ sudo cp -f /mnt/esp0/EFI/*/grubx64.efi /mnt/esp1/EFI/BOOT/BOOTX64.EFI
sync

(That EFI/*/grubx64.efi glob is deliberate so it works even if the vendor dir isn’t exactly truenas.)

D. Add an NVRAM boot entry for the new drive (optional but recommended)

First, see existing entries:

truenas_admin@truenas[~]$ sudo efibootmgr -v
BootCurrent: 0002
Timeout: 1 seconds
BootOrder: 0002,0000,0001,0003
Boot0000  Windows Boot Manager  VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)WINDOWS.........x...B.C.D.O.B.J.E.C.T.=.{.9.d.e.a.8.6.2.c.-.5.c.d.d.-.4.e.7.0.-.a.c.c.1.-.f.3.2.b.3.4.4.d.4.7.9.5.}...................
Boot0001  debian        VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
Boot0002* debian        HD(2,GPT,24657b7f-2cfa-44dc-842a-498c73e48478,0x1800,0x100000)/File(\EFI\debian\shimx64.efi)
Boot0003  TrueNAS (nvme1)       VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)

Then create a new one pointing to the copied loader (adjust the path to whatever you actually have on the ESP; use the find output you ran above):

truenas_admin@truenas[~]$ sudo efibootmgr -c -d /dev/nvme1n1 -p 2 -L "TrueNAS (nvme1)" -l '\EFI\truenas\grubx64.efi'
BootCurrent: 0002
Timeout: 1 seconds
BootOrder: 0004,0002,0000,0001,0003
Boot0000  Windows Boot Manager
Boot0001  debian
Boot0002* debian
Boot0003  TrueNAS (nvme1)
Boot0004* TrueNAS (nvme1)
E. Unmount

sudo umount /mnt/esp0 /mnt/esp1

F. Validate EFI Bootmanager
truenas_admin@truenas[~]$ sudo efibootmgr -v
BootCurrent: 0003
Timeout: 1 seconds
BootOrder: 0006,0005,0003,0000,0001,0002,0004
Boot0000  Windows Boot Manager  VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)WINDOWS.........x...B.C.D.O.B.J.E.C.T.=.{.9.d.e.a.8.6.2.c.-.5.c.d.d.-.4.e.7.0.-.a.c.c.1.-.f.3.2.b.3.4.4.d.4.7.9.5.}...................
Boot0001  debian        VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
Boot0002  debian        VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
Boot0003* debian        HD(2,GPT,51003091-0620-4c18-868c-ea9dba1f564e,0x1800,0x100000)/File(\EFI\debian\shimx64.efi)
Boot0004  TrueNAS (nvme1)       VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
Boot0005* TrueNAS (nvme1)       HD(2,GPT,ff4dba6e-93b9-45ea-b546-33c28c63f612,0x1800,0x100000)/File(\EFI\truenas\grubx64.efi)
Boot0006* TrueNAS (nvme1)       HD(2,GPT,51003091-0620-4c18-868c-ea9dba1f564e,0x1800,0x100000)/File(\EFI\truenas\grubx64.efi)

NOTE: as this is my 3rd iteration, Boot0004 does not have a proper file path (it shows VenHw), meaning it's probably stale, so I removed it:

truenas_admin@truenas[~]$ sudo efibootmgr -b 0003 -B
BootCurrent: 0002
Timeout: 1 seconds
BootOrder: 0004,0002,0000,0001
Boot0000  Windows Boot Manager
Boot0001  debian
Boot0002* debian
Boot0004* TrueNAS (nvme1)
G. Now you can test by removing a drive from the mirror and rebooting the system

The pools go into a degraded state after pulling drive 2:

truenas_admin@truenas[~]$ sudo zpool status
  pool: boot-pool
 state: DEGRADED
 config:

        NAME                      STATE     READ WRITE CKSUM
        boot-pool                 DEGRADED     0     0     0
          mirror-0                DEGRADED     0     0     0
            15318324978903853925  FAULTED      0     0     0  was /dev/nvme0n1p3
            nvme0n1p3             ONLINE       0     0     0


  pool: ssd-pool
 state: DEGRADED
 config:

        NAME                     STATE     READ WRITE CKSUM
        ssd-pool                 DEGRADED     0     0     0
          mirror-0               DEGRADED     0     0     0
            2513775003968519971  FAULTED      0     0     0  was /dev/nvme0n1p4
            nvme0n1p4            ONLINE       0     0     0

ZFS changes how it displays a vdev when the normal device path is missing, so you'll see a long number instead of a name like nvme1n1p3.

Those long numbers like [2513775003968519971] are ZFS vdev GUIDs.

ZFS stores devices internally by GUID. When the OS device node (like /dev/nvme1n1p3) is missing/unavailable, ZFS can’t resolve it to a path, so it shows the GUID instead.

You would simply detach the FAULTED drives:

truenas_admin@truenas[~]$ sudo zpool detach ssd-pool 2513775003968519971
truenas_admin@truenas[~]$ sudo zpool detach boot-pool 15318324978903853925
truenas_admin@truenas[~]$ zpool status
  pool: boot-pool
 state: ONLINE
  scan: resilvered 3.23G in 00:00:06 with 0 errors on Thu Mar  5 06:11:03 2026
config:

        NAME         STATE     READ WRITE CKSUM
        boot-pool    ONLINE       0     0     0
          nvme0n1p3  ONLINE       0     0     0

errors: No known data errors

  pool: ssd-pool
 state: ONLINE
  scan: resilvered 2.21M in 00:00:00 with 0 errors on Thu Mar  5 06:14:42 2026
config:

        NAME         STATE     READ WRITE CKSUM
        ssd-pool     ONLINE       0     0     0
          nvme0n1p4  ONLINE       0     0     0

NOTE: Make sure you identify the device names correctly, as they might change after a drive is removed, so check your zpool status first!

When running manual ZFS commands, never rely on /dev/nvmeX device names.

Instead, use the stable paths under /dev/disk/by-id/.

Check them:

ls -l /dev/disk/by-id/ | grep -i nvme
ls -l /dev/disk/by-id/ | grep -E 'part3|part4'
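The partition suffixes under /dev/disk/by-id/ follow a predictable `<disk-id>-partN` pattern, so a tiny helper can compose the path to hand to zpool attach. The model/serial string below is a hypothetical placeholder; substitute the real id shown by the ls above:

```shell
# Compose a stable by-id partition path from a disk id and partition number.
# The disk id used here is a placeholder, not a real device.
byid_part() { printf '/dev/disk/by-id/%s-part%s\n' "$1" "$2"; }
byid_part "nvme-CT1000P3PSSD8_SERIAL" 4
# prints /dev/disk/by-id/nvme-CT1000P3PSSD8_SERIAL-part4
# then e.g.: sudo zpool attach ssd-pool nvme0n1p4 "$(byid_part <disk-id> 4)"
```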

After the drive replacement, start again from the top of "(Re-)Creating a mirrored Boot and Data Pool".


@7rdamian commented Mar 4, 2026

thank you so much! i was stuck on this for hours and nothing was working.
