@Szpadel
Last active November 6, 2025 15:28
These are instructions for running multi-level compression for zram swap.

The idea is that we want to swap memory out quickly when memory is needed, while cold pages get compressed further in the background. These instructions also cover writeback to traditional block storage for incompressible pages, which would otherwise not reduce memory usage at all. Zstd has the nice property that its decompression speed is essentially independent of the compression level, so higher levels only slow down compression. Using zstd as the default compressor causes some UI lag when memory needs to be paged out, so on its own it is an okay but not great solution. Using zstd for recompression in the background has very little impact, assuming you are not using every CPU cycle, and compressing only old pages makes it fairly safe to assume they will not be needed anytime soon. This may consume one CPU core during recompression (done by the recompress-idle-zram script below), but otherwise gives the best of both worlds: fast memory reclaim, fast swap reads, and very good compression (usually a 4-5x reduction).
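You can observe this property of zstd directly with its built-in benchmark mode (assuming the zstd CLI is installed; the input file below is an arbitrary choice, any reasonably large compressible file works):

```shell
# Benchmark compression levels 1 through 19 on a sample file.
# The compression-speed column drops sharply as the level rises,
# while the decompression-speed column stays roughly constant.
zstd -b1 -e19 /usr/share/dict/words
```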

The best way to initialize the zram swap device on boot is zram-generator, but due to a bug it is unable to initialize multiple compression levels. You will need to compile it from source with the following patch included: systemd/zram-generator#237
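A rough sketch of building it from source with that patch applied (git, curl, and a Rust toolchain are assumed; the install path is the standard systemd generator directory, adjust for your distribution):

```shell
git clone https://github.com/systemd/zram-generator.git
cd zram-generator
# Apply the patch from PR #237 (GitHub's standard .patch export URL)
curl -L https://github.com/systemd/zram-generator/pull/237.patch | git am
cargo build --release
# Install over the distro binary, or adjust the path for your distribution
sudo install -m 0755 target/release/zram-generator \
  /usr/lib/systemd/system-generators/zram-generator
```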

/etc/systemd/zram-generator.conf

[zram0]
zram-size = ram # You might even use `ram *2` depending on your needs
compression-algorithm = lzo-rle zstd(level=22) # lzo-rle is a very fast compressor; zstd(level=22) is the best possible compression but very slow. level=22 might be overkill anyway; 15 or even 10 might also be fine
writeback-device=/dev/disk/by-id/nvme-XXXXXXX-part5 # this needs to be a raw partition, a file is not allowed; skip this line if you do not want to offload incompressible data to disk
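For reference, the equivalent manual setup through sysfs looks roughly like this (a sketch based on the kernel's zram admin documentation; note that `backing_dev` must be set before `disksize`, and the `algorithm_params` attribute for setting a per-algorithm compression level only exists on recent kernels):

```shell
modprobe zram num_devices=1
echo lzo-rle > /sys/block/zram0/comp_algorithm                    # primary (fast) compressor
echo "algo=zstd priority=1" > /sys/block/zram0/recomp_algorithm   # secondary compressor
# On kernels that support algorithm_params, set the secondary compressor's level:
echo "priority=1 level=22" > /sys/block/zram0/algorithm_params
echo /dev/disk/by-id/nvme-XXXXXXX-part5 > /sys/block/zram0/backing_dev  # optional
echo 16G > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 100 /dev/zram0
```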

/usr/local/libexec/recompress-idle-zram

#!/usr/bin/env bash
set -euo pipefail

main() {
  ZRAM_DEVICE=zram0 # Must match zram device from zram-generator.conf
  IDLE_TIME=3600 # 1h, how old page is considered idle (cold) and should be recompressed
  readonly LOOP_INTERVAL=15

  while true; do
    loop_start=$(date +%s)
    # Mark pages untouched for IDLE_TIME seconds as idle
    echo "$IDLE_TIME" > "/sys/block/$ZRAM_DEVICE/idle"
    # Recompress huge pages (incompressible by the primary compressor), even if not yet idle
    echo type=huge > "/sys/block/$ZRAM_DEVICE/recompress"
    # Recompress up to 50k idle pages (~200MB) with the secondary compressor
    echo type=idle max_pages=50000 > "/sys/block/$ZRAM_DEVICE/recompress"
    # Write pages incompressible even by the secondary compressor to the backing device;
    # remove this line if you did not configure writeback-device (it would fail and,
    # with set -e, kill the loop)
    echo incompressible > "/sys/block/$ZRAM_DEVICE/writeback"
    elapsed=$(( $(date +%s) - loop_start ))
    if (( elapsed < LOOP_INTERVAL )); then
      sleep $(( LOOP_INTERVAL - elapsed ))
    fi
  done
}
main

In theory zram supports up to 4 compressors (1 primary and 3 secondary) and we could recompress into intermediate levels, but there is a kernel bug that resets a page's timestamp to 0 when the page is recompressed, so any later idle marking sees the page as very old and recompresses it again. This simple script marks all idle pages, then tries to recompress huge pages (pages incompressible by the primary compressor), even if they are not yet idle. Then it recompresses up to 50k pages (equivalent to ~200MB of memory) with the secondary compressor; the limit prevents it from getting stuck at this step for a long time. Finally it writes out all pages that were incompressible even with the secondary compressor.
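To check that recompression is actually paying off, you can read the device's mm_stat counters. The helper below is a small sketch based on the field layout in the kernel's zram documentation: column 1 is orig_data_size and column 3 is mem_used_total, both in bytes:

```shell
#!/usr/bin/env bash
# Print the effective compression ratio of a zram device from its mm_stat file.
# Column 1 = orig_data_size (bytes stored), column 3 = mem_used_total
# (bytes of actual memory used, including allocator overhead).
zram_ratio() {
  awk '{ if ($3 > 0) printf "%.2fx\n", $1 / $3; else print "n/a" }' "$1"
}

# Usage on a live system: zram_ratio /sys/block/zram0/mm_stat
```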

/etc/systemd/system/recompress-idle-zram.service

[Unit]
Description=Recompress idle zram swap
After=multi-user.target

[Service]
Type=simple
ExecStart=/usr/local/libexec/recompress-idle-zram
RemainAfterExit=yes
Restart=always
RestartSec=60s

[Install]
WantedBy=multi-user.target
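With the script and unit file in place, enabling the service is the usual systemd routine:

```shell
# Make the script executable, then register and start the unit
sudo chmod +x /usr/local/libexec/recompress-idle-zram
sudo systemctl daemon-reload
sudo systemctl enable --now recompress-idle-zram.service
# Verify it is running and looping without errors
systemctl status recompress-idle-zram.service
```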

Because using swap is very inexpensive now, you might allow the kernel to swap early in exchange for better disk caches:

vm.swappiness = 150
vm.watermark_boost_factor = 0
vm.watermark_scale_factor = 10
vm.page-cluster = 0
vm.vfs_cache_pressure = 50
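These settings can be applied persistently through a sysctl drop-in (the file name below is an arbitrary choice):

```shell
sudo tee /etc/sysctl.d/99-zram-swap.conf >/dev/null <<'EOF'
vm.swappiness = 150
vm.watermark_boost_factor = 0
vm.watermark_scale_factor = 10
vm.page-cluster = 0
vm.vfs_cache_pressure = 50
EOF
# Reload all sysctl configuration, including the new drop-in
sudo sysctl --system
```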