Migrate HDDs to NVMEs

Migrate Btrfs and LVM from HDD to NVME

Old state:

  • 2x HDD, 2TB each (sda, sdb) - TOSHIBA P300 (HDWD120)
  • both have the same partitions: 1MB BIOS boot, 256MB EFI System, 1TB btrfs, 838GB LVM
  • btrfs (sda3, sdb3) is set to mirror (btrfs raid1), mounted on /var/lib (and some other places)
  • LVM (sda4, sdb4) has a mirrored thin-pool (dm raid1) and various LVs with no specific configuration, used as disks for libvirt VMs. The volume group is called vg.
❯ sudo findmnt /var/lib/
TARGET           SOURCE               FSTYPE OPTIONS
/var/lib         /dev/sdb3[/@lib]     btrfs  rw,relatime,space_cache,subvolid=730,subvol=/@lib
❯ sudo lvs vg
  LV            VG Attr       LSize   Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
  arch          vg Vwi-a-t---  26.00g thin_pool        99.68                                  
  coreos        vg Vwi-a-t---  10.00g thin_pool        100.00                                 
  debian        vg Vwi-a-t---  20.00g thin_pool        100.00                                 
  fedora        vg Vwi-a-t---  20.00g thin_pool        100.00                                 
  nix-vm        vg Vwi-a-t---  45.00g thin_pool        87.84                                  
  openbsd       vg Vwi-a-t---  10.00g thin_pool        34.73                                  
  swap          vg Vwi-aot---  16.00g thin_pool        100.00                                 
  test_bcachefs vg Vwi-a-t---  20.00g thin_pool        35.48                                  
  thin_pool     vg twi-aot--- 800.00g                  27.98  21.41                           
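
As a side note, findmnt shows only one of the two mirrored devices, and lvs alone doesn't show which PVs back each raid1 leg; both can be double-checked with something like:

sudo btrfs filesystem show /var/lib
sudo lvs -a -o +devices vg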

Note

The "1MB BIOS boot" and "256MB EFI System" partitions were not used, disks were partitioned like that just in case

New state:

  • 2x NVME, 2TB each (nvme1n1, nvme2n1) - Samsung 990 EVO Plus
  • move btrfs from sda3/sdb3 to the NVMEs (nvme1n1p3, nvme2n1p3)
  • move lvm thin-pool from sda4/sdb4 to the NVMEs (nvme1n1p4, nvme2n1p4)
  • the HDDs should end up unused and can be taken offline

Steps:

Use sfdisk to create the exact same partition layout on the NVMEs as on the HDDs:

sudo sfdisk /dev/sdb -d | sudo sfdisk /dev/nvme1n1
sudo sfdisk /dev/nvme1n1 -d | sudo sfdisk /dev/nvme2n1
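
To sanity-check that the copied layout matches the source before going further, a quick comparison like this should be enough:

sudo sfdisk -d /dev/nvme1n1
sudo sfdisk -d /dev/nvme2n1
lsblk -o NAME,SIZE,TYPE /dev/sda /dev/nvme1n1 /dev/nvme2n1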

BTRFS

Move the btrfs volumes, one by one. First sda3:

sudo btrfs replace start /dev/sda3 /dev/nvme1n1p3  /var/lib/
sudo btrfs replace status /var/lib/
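
By default btrfs replace status keeps printing updates until the operation finishes; to get a single snapshot instead, add -1:

sudo btrfs replace status -1 /var/lib/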

… this takes a while, about 1h30m in my case …

Then sdb3:

sudo btrfs replace start /dev/sdb3 /dev/nvme2n1p3  /var/lib/
sudo btrfs replace status /var/lib/

After the replace, sda3 and sdb3 are no longer in use. Confirm with:

sudo btrfs device usage /var/lib
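
Since the NVME partitions are the same size as the old ones, no resize is needed. If they had been larger, each replaced device would also need to be grown to use the extra space, roughly like this (1 and 2 being the devids shown by btrfs device usage):

sudo btrfs filesystem resize 1:max /var/lib
sudo btrfs filesystem resize 2:max /var/lib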

LVM

Move the data from the HDDs to the NVMEs:

sudo pvcreate /dev/nvme1n1p4 /dev/nvme2n1p4
sudo vgextend vg /dev/nvme1n1p4 /dev/nvme2n1p4
sudo vgs -a -o +devices

sudo pvmove /dev/sda4 /dev/nvme1n1p4
sudo pvmove /dev/sdb4 /dev/nvme2n1p4
sudo lvs -a -o +devices
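
pvmove reports its own progress, but it can also be watched from another terminal with something along these lines:

watch -n 10 'sudo lvs -a -o name,copy_percent vg'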

Remove HDDs:

# check status
sudo pvs -o+pv_used
sudo lvs -a -o+seg_monitor,raid_sync_action,devices

sudo vgreduce vg /dev/sda4 /dev/sdb4
sudo pvremove /dev/sda4 /dev/sdb4
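
Before taking the HDDs offline for good, a last check that nothing still references them might look like:

sudo pvs
sudo btrfs device usage /var/lib
lsblk /dev/sda /dev/sdb

# optional: clear any leftover signatures on the old btrfs partitions (destructive)
sudo wipefs -a /dev/sda3 /dev/sdb3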