The following is a quick guide on getting basic status LED functionality working with TrueNAS running on the UGREEN DXP4800 Plus. Theoretically, it should work on all models (with some small revisions to the script), but I only have a DXP4800 Plus. :)
This guide sets up a cron job that runs a script to update the LEDs every couple of minutes, but I'm sure the following can be modified for blinky LEDs as well.
- Manually build or download the `ugreen_leds_cli` tool from https://github.com/miskcoo/ugreen_dx4600_leds_controller (see the build sketch after this list).
- Plop it somewhere on your NAS (e.g. a dataset).
- In the same dataset, create your `.sh` script that controls the LEDs. At the bottom of this gist is my modified version of meyergru's.
- Make the script executable: `chmod +x your-script.sh`.
  - You may also need to make `ugreen_leds_cli` executable as well.
- In TrueNAS, navigate over to System Settings → Advanced.
- Under Init/Shutdown Scripts, create the following to load the `i2c-dev` module on boot:
  - Description: `Enable i2c-dev`
  - Type: `Command`
  - Command: `modprobe i2c-dev`
  - When: `Pre Init`
- Under Cron Jobs, we then create a task to run every x minutes:
  - Description: `Update Status LEDS`
  - Command: `/mnt/path/to/your/script.sh`
  - Run as User: `root`
  - Schedule: `*/5 * * * *` (or however often you desire)
- Reboot and wait a bit for your cron job to run (or test things manually first; see the verification sketch below).
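
If you want to build the tool yourself, the following is a rough sketch. I'm assuming the CLI lives in the repo's `cli` directory and builds with `make`; check the repo's README if the layout differs, and substitute your own dataset path for the placeholder below.

```bash
# Build the LED CLI (do this wherever you have a compiler and make available)
git clone https://github.com/miskcoo/ugreen_dx4600_leds_controller.git
cd ugreen_dx4600_leds_controller/cli   # directory name is an assumption; see the repo's README
make

# Copy the binary and your LED script to a dataset on the NAS (path is a placeholder)
cp ugreen_leds_cli /mnt/your-pool/your-dataset/
cp your-script.sh /mnt/your-pool/your-dataset/

# Make both executable
chmod +x /mnt/your-pool/your-dataset/ugreen_leds_cli
chmod +x /mnt/your-pool/your-dataset/your-script.sh
```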
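Before relying on the cron job, you can sanity-check everything from a shell on the NAS. This quick manual test just reuses the same `modprobe` command and the LED names/flags the script uses; adjust the path to wherever you placed the binary.

```bash
# Load the i2c module by hand (the Init/Shutdown Script does this automatically on boot)
modprobe i2c-dev
lsmod | grep i2c_dev

# Turn the power LED on in white at low brightness to confirm the CLI can talk to the controller
cd /mnt/your-pool/your-dataset
./ugreen_leds_cli power -color 255 255 255 -on -brightness 64

# Then run the status script once by hand before scheduling it
./your-script.sh
```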
- https://github.com/miskcoo/ugreen_dx4600_leds_controller
- https://github.com/meyergru/ugreen_dxp8800_leds_controller
#! /bin/bash
#set -x
SCRIPTPATH=$(dirname "$0")
echo $SCRIPTPATH
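# Status codes used below (one per LED in "map"):
#   p = power (solid white), u = network up (solid white), o = pool disk ONLINE (green),
#   f = pool disk not ONLINE (blinking red), n/x = network down / unknown disk (LED off)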
devices=(p n x x x x)
map=(power netdev disk1 disk2 disk3 disk4)
# Check network status
gw=$(ip route | awk '/default/ { print $3; exit }')
if [[ -n "$gw" ]] && ping -q -c 1 -W 1 "$gw" >/dev/null; then
devices[1]=u
fi
# Map sdX1 to hardware device
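# (HCTL looks like host:channel:target:lun; the script keys off the first character,
#  i.e. the SCSI host number, which it assumes corresponds to the drive bay:
#  host 0 -> disk1, host 1 -> disk2, and so on.)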
declare -A hwmap
echo "Mapping devices..."
while read line; do
MAP=($line)
device=${MAP[0]}
hctl=${MAP[1]}
partitions=$(lsblk -l -o NAME | grep "^${device}[0-9]\+$")
for part in $partitions; do
hwmap[$part]=${hctl:0:1}
echo "Mapped $part to ${hctl:0:1}"
done
done <<< "$(lsblk -S -o NAME,HCTL | tail -n +2)"
# Print the hwmap for verification
echo "Hardware mapping (hwmap):"
for key in "${!hwmap[@]}"; do
echo "$key: ${hwmap[$key]}"
done
# Check status of zpool disks
echo "Checking zpool status..."
while read line; do
DEV=($line)
partition=${DEV[0]}
echo "Processing $partition with status ${DEV[1]}"
if [[ -n "${hwmap[$partition]}" ]]; then
index=$((${hwmap[$partition]} + 2))
echo "Device $partition maps to index $index"
if [[ "${DEV[1]}" == "ONLINE" ]]; then
devices[$index]=o
else
devices[$index]=f
fi
else
echo "Warning: No mapping found for $partition"
fi
done <<< "$(zpool status -L | grep -E '^\s+sd[a-h][0-9]')"
# Output the final device statuses
echo "Final device statuses:"
for i in "${!devices[@]}"; do
echo "$i: ${devices[$i]}"
case "${devices[$i]}" in
p)
"$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 255 255 255 -on -brightness 64
;;
u)
"$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 255 255 255 -on -brightness 64
;;
o)
"$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 0 255 0 -on -brightness 64
;;
f)
"$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 255 0 0 -blink 400 600 -brightness 64
;;
*)
"$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -off
;;
esac
done
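
To adapt the script for other models (as mentioned at the top), the main change is the size of the `devices` and `map` arrays. The sketch below is for a hypothetical 8-bay unit; the LED names `disk5` through `disk8` are an assumption on my part, so check the linked controller repos for the names your model exposes. The `zpool status` grep already matches `sd[a-h]`, so it covers up to eight disks as-is.

```bash
# Hypothetical 8-bay variant (e.g. DXP8800 Plus): one extra status slot and LED name per extra bay
devices=(p n x x x x x x x x)
map=(power netdev disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8)
```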

I don't have much experience with DSM. I know it uses its own modified RAID system (on older kernels like 4.x and 5.x), but not much beyond that. At some point I considered running DSM in a VM on Proxmox, but I discarded the idea because it seemed like it would cause more problems (in addition to those already inherent to Proxmox/VMware):
- Major DSM releases often change the partition layout, which makes it impossible to upgrade ARC (and obviously DSM).
- ARC minor upgrades are far from perfect; something fails and you get stuck.
- It's a patched OS. If something breaks completely, I know I can install TrueNAS directly and import my ZFS pool in minutes. Maybe the same is possible with a patched DSM, but I'm not sure.
- Lastly, DSM needs days to build parity with large disks (e.g., 6×16 TB). ZFS does NOT need to initialize parity when you create a vdev. I prefer not to stress disks unnecessarily.