@Kerryliu
Last active December 7, 2025 16:47

UGREEN DXP4800 Plus TrueNAS Status LED Guide

The following is a quick guide on getting basic status LED functionality working with TrueNAS running on the UGREEN DXP4800 Plus. Theoretically, it should work on all models (with some small revisions to the script), but I only have a DXP4800 Plus. :)

This guide uses a cron job that runs a script to update the LEDs every couple of minutes, but I'm sure the following can be modified for blinky LEDs as well.

Steps:

  1. Manually build or download the ugreen_leds_cli tool from https://github.com/miskcoo/ugreen_dx4600_leds_controller.
  2. Plop it somewhere on your NAS (e.g. a dataset).
  3. In the same dataset, create your .sh script that controls the LEDs. At the bottom of this gist is my modified version of meyergru's.
  4. Make the script executable: chmod +x your-script.sh (a quick manual test is sketched after this list).
    • You may need to make ugreen_leds_cli executable as well.
  5. In TrueNAS, navigate to System Settings → Advanced.
  6. Under Init/Shutdown Scripts, create the following to load the i2c-dev module on boot:
    • Description: Enable i2c-dev
    • Type: Command
    • Command: modprobe i2c-dev
    • When: Pre Init
  7. Under Cron Jobs, create a task that runs every few minutes:
    • Description: Update Status LEDs
    • Command: /mnt/path/to/your/script.sh
    • Run as User: root
    • Schedule: */5 * * * * (or however often you desire)
  8. Reboot and wait a bit for your cron job to run.
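Before relying on the cron job, you can sanity-check everything by hand. A minimal sketch, assuming the tool and script live in /mnt/tank/scripts (a made-up example path; adjust to wherever you actually put them):

modprobe i2c-dev                        # load the I2C module once by hand
chmod +x /mnt/tank/scripts/ugreen_leds_cli /mnt/tank/scripts/your-script.sh
/mnt/tank/scripts/ugreen_leds_cli power -color 255 255 255 -on -brightness 64   # smoke test one LED
/mnt/tank/scripts/your-script.sh        # run the status script once manually

If the power LED lights up and the script prints its mapping and status output without errors, the cron job should behave the same after a reboot.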

Sources:

  • https://github.com/miskcoo/ugreen_dx4600_leds_controller (ugreen_leds_cli)

Example script:

#!/bin/bash

# Uncomment the next line to trace execution while debugging:
#set -x

SCRIPTPATH=$(dirname "$0")
echo "$SCRIPTPATH"

devices=(p n x x x x)
map=(power netdev disk1 disk2 disk3 disk4)
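# Status codes used in this script (consumed by the case statement at the end):
#   p = power LED (solid white), u = network up (gateway answered ping, solid white),
#   o = pool member ONLINE (solid green), f = pool member not ONLINE (blinking red),
#   n / x = no status determined, LED is turned off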

# Check network status: mark netdev as up if the default gateway answers a ping
gw=$(ip route | awk '/default/ { print $3; exit }')
if [[ -n "$gw" ]] && ping -q -c 1 -W 1 "$gw" >/dev/null; then
    devices[1]=u
fi

# Map sdX1 to hardware device
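# (Illustrative) 'lsblk -S -o NAME,HCTL' prints one line per disk, e.g. "sda 0:0:0:0".
# The first HCTL digit is the SCSI host number; this script assumes it matches
# the drive bay, so host 0 -> disk1, host 1 -> disk2, and so on.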
declare -A hwmap
echo "Mapping devices..."
while read -r line; do
    MAP=($line)
    device=${MAP[0]}
    hctl=${MAP[1]}
    partitions=$(lsblk -l -o NAME | grep "^${device}[0-9]\+$")
    for part in $partitions; do
        hwmap[$part]=${hctl:0:1}
        echo "Mapped $part to ${hctl:0:1}"
    done
done <<< "$(lsblk -S -o NAME,HCTL | tail -n +2)"

# Print the hwmap for verification
echo "Hardware mapping (hwmap):"
for key in "${!hwmap[@]}"; do
    echo "$key: ${hwmap[$key]}"
done

# Check status of zpool disks
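# (Illustrative) 'zpool status -L' resolves vdev names to real sdX partitions,
# so the grep below matches lines such as:
#     sda1    ONLINE       0     0     0
#     sdb1    ONLINE       0     0     0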
echo "Checking zpool status..."
while read -r line; do
    DEV=($line)
    partition=${DEV[0]}
    echo "Processing $partition with status ${DEV[1]}"
    if [[ -n "${hwmap[$partition]}" ]]; then
        index=$((${hwmap[$partition]} + 2))
        echo "Device $partition maps to index $index"
        if [ "${DEV[1]}" = "ONLINE" ]; then
            devices[$index]=o
        else
            devices[$index]=f
        fi
    else
        echo "Warning: No mapping found for $partition"
    fi
done <<< "$(zpool status -L | grep -E '^\s+sd[a-h][0-9]')"

# Output the final device statuses
echo "Final device statuses:"
for i in "${!devices[@]}"; do
    echo "$i: ${devices[$i]}"
    case "${devices[$i]}" in
        p)
            "$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 255 255 255 -on -brightness 64
            ;;
        u)
            "$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 255 255 255 -on -brightness 64
            ;;
        o)
            "$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 0 255 0 -on -brightness 64
            ;;
        f)
            "$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 255 0 0 -blink 400 600 -brightness 64
            ;;
        *)
            "$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -off
            ;;
    esac
done
@patchrick84

For those running TrueNAS Scale (CE) inside a Proxmox 9 VM, this script leverages the QEMU Guest Agent (installed by default) to query the VM for disk and zpool status. This allows you to visualize the state of your virtualized storage directly on the NAS hardware, even when running in a virtualized environment.

Please read the comments.
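A minimal sketch of the kind of host-side query such a script might make through the QEMU Guest Agent; the VM ID 100 and the jq call are assumptions for illustration, not taken from that script:

# Run on the Proxmox host, not inside the TrueNAS VM.
# 'qm guest exec' asks the guest agent in the VM to run a command and returns
# JSON; jq pulls out the captured stdout.
VMID=100   # hypothetical VM ID, replace with your TrueNAS VM's ID
qm guest exec "$VMID" -- zpool status -L | jq -r '."out-data"'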

Any chance something like this can be adapted to running DSM in a VM with the SATA controller(s) passed through? I'm not proficient enough to modify the script myself but might be able to limp along with some guidance.

@jagaliano commented Dec 3, 2025

I don't have much experience with DSM. I know it uses its own modified RAID system (on older kernels like 4.x and 5.x), but not much beyond that. At some point I considered running DSM in a VM on Proxmox, but I discarded the idea because it seemed like it would cause more problems (in addition to those already inherent to Proxmox/VMware):

  • Major DSM releases often change the partition layout, which makes it impossible to upgrade ARC (and obviously DSM).

  • ARC minor upgrades are far from perfect — something fails and you get stuck.

  • It's a patched OS. If something breaks completely, I know I can install TrueNAS directly and import my ZFS pool in minutes. Maybe the same is possible with a patched DSM, but I’m not sure.

Lastly, DSM needs days to build parity with large disks (e.g., 6×16 TB). ZFS does NOT need to initialize parity when you create a vdev. I prefer not to stress disks unnecessarily.
