@casebeer
Last active August 26, 2025 01:09
Working Frigate NVR configuration files and setup steps for rootless Docker on Ubuntu 24.04.
#
# Basic working Frigate NVR config with internal test pattern generation
# - For Intel graphics
# - With a PCIe Coral TPU
# - Recording disabled by default (enable per camera), but default retention configured
# - With an ffmpeg-generated test pattern as the initial camera for debugging
# WARNING: Disable the test pattern camera and go2rtc config to minimize CPU usage once initial setup is complete
#
mqtt:
  enabled: false

# Configure the database outside of the config dir; probably moot if the config
# dir mount remains writable for config.yml and model_cache
#database:
#  path: /db/frigate.db

ffmpeg:
  #hwaccel_args: preset-vaapi
  hwaccel_args: preset-intel-qsv-h264

detectors:
  coral:
    type: edgetpu
    device: pci

record:
  enabled: false
  retain:
    days: 7
    mode: all
  alerts:
    retain:
      days: 30
  detections:
    retain:
      days: 30

snapshots:
  enabled: false
  retain:
    default: 30

objects:
  track:
    - person

go2rtc:
  streams:
    # TODO: disable me when testing is complete to reduce CPU usage
    internal_test_pattern: exec:/usr/lib/ffmpeg/7.0/bin/ffmpeg -f lavfi -i testsrc2=s=427x240:r=10 -c:v libx264 -preset ultrafast -rtsp_transport tcp -f rtsp {{output}}

cameras:
  # Test pattern generated by the internal go2rtc/ffmpeg
  # TODO: disable me when testing is complete to reduce CPU usage
  internal-test-pattern:
    detect:
      enabled: true
      width:
      height:
      fps: 5
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/internal_test_pattern
          roles:
            - detect
  # example-rtsp-camera:
  #   detect:
  #     enabled: true
  #     width:
  #     height:
  #     fps: 5
  #   ffmpeg:
  #     #input_args: preset-rtsp-udp
  #     inputs:
  #       - path: rtsp://hostname.example.com:554/camera-path
  #         roles:
  #           - detect
# Assumes rootless Docker setup: https://gist.github.com/casebeer/55827d2eafc83319aac54f3f7840afdf
# and filesystem ACLs installed and configured on the root and docker-compose config filesystems.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:0.15.2
    #image: ghcr.io/blakeblackshear/frigate:0.16.0
    #image: ghcr.io/blakeblackshear/frigate:stable
    #
    # image tags:
    #   stable-tensorrt  Nvidia GPU
    #   stable-rocm      AMD GPU
    #   stable           amd64 + RPi
    #
    container_name: frigate
    restart: unless-stopped
    stop_grace_period: 30s # allow enough time to shut down the various services
    shm_size: "512mb" # update for your cameras based on the calculation above
    # Create the Frigate configuration directories in the same folder as docker-compose.yml,
    # then grant the Docker inside-container-root subuid default facl read/write access to them:
    #
    #   mkdir -p frigate/{config,db}
    #   sudo setfacl -R -m d:u:200000:rwx -m u:200000:rwx frigate/
    #
    # You might also want to grant your own user default facls to any files Frigate creates:
    #
    #   sudo setfacl -R -m d:u:${USER}:rwx -m u:${USER}:rwx frigate/
    volumes:
      - /usr/share/zoneinfo/America/New_York:/etc/localtime:ro # force localtime for video timestamps (since the host should be on UTC)
      - ./frigate/config:/config
      - ./frigate/db:/db # must override the default db location in the config
      - /mnt/nas-storage-mountpoint/frigate:/media/frigate # should be NAS storage to avoid local SSD wear
      - type: tmpfs # Optional: 1GB of memory; reduces SSD/SD card wear
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - "8971:8971" # HTTPS TLS 1.3 (with untrusted cert) and Frigate auth. Admin password is in the first-run container logs.
      #- "5000:5000" # HTTP unauthenticated admin access. Expose carefully.
      - "8554:8554" # RTSP feeds
      - "8555:8555/tcp" # WebRTC over TCP
      - "8555:8555/udp" # WebRTC over UDP
    # Must install the Coral drivers on the host. N.b. the apt packages for the Coral drivers
    # are broken in Ubuntu 23.04 and later, so you must compile a .deb manually and then pin
    # its version; see install-coral-drivers.sh.
    #
    # Test the video card driver and permissions both inside and outside the container with:
    #   ffmpeg -vaapi_device /dev/dri/renderD128
    #   vainfo
    devices:
      - /dev/apex_0:/dev/apex_0 # passes a PCIe Coral; follow the driver instructions
      - /dev/dri/renderD128:/dev/dri/renderD128 # for Intel hwaccel; update for your hardware
    # These groups must be container-internal mapped GIDs corresponding to the host's render and apex groups.
    # First, determine the host's GIDs for render and apex:
    #
    #   getent group render ; getent group apex
    #
    # Then append single-group mappings to /etc/subgid allowing the dockremap host user to use the groups:
    #
    #   echo "dockremap:$(getent group render | cut -d : -f 3):1" | sudo tee -a /etc/subgid
    #   echo "dockremap:$(getent group apex | cut -d : -f 3):1" | sudo tee -a /etc/subgid
    #
    # Note that /etc/subgid allows multiple lines per host user.
    # Restart the Docker daemon:
    #
    #   systemctl restart docker
    #
    # Next, we'll need the mapped (container-internal) GID.
    #
    # One way to get this is to count all previously mapped subgids for the dockremap user;
    # e.g. if the subgid file is:
    #
    #   dockremap:200000:65536
    #   dockremap:993:1
    #   dockremap:1001:1
    #
    # then 993 (e.g. render) is mapped to container GID 65536, and 1001 (e.g. apex)
    # to 65537, since the first range already consumed container GIDs 0-65535.
    #
    # Otherwise, restart the Docker daemon and inspect the mapped groups from inside the container:
    #
    #   $ docker compose exec frigate ls -l /dev/dri/renderD128
    #   crw-rw---- 1 nobody 65536 226, 128 Aug 25 20:35 /dev/dri/renderD128
    #   $ docker compose exec frigate ls -l /dev/apex_0
    #   crw-rw---- 1 nobody 65537 120, 0 Aug 25 20:35 /dev/apex_0
    #
    # Now, group_add those GIDs to the container:
    group_add:
      - 65536 # subgid-mapped GID for the host's render group TODO: CHECK MAPPED GID IS CORRECT
      - 65537 # subgid-mapped GID for the host's apex group TODO: CHECK MAPPED GID IS CORRECT
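The mapped-GID counting described in the comments above can be scripted. A sketch (the `mapped_gid` helper name is mine): container GIDs are assigned sequentially in /etc/subgid file order, so the container-internal GID for a host GID mapped with a single-ID line is the sum of the range sizes of all earlier dockremap lines:

```shell
# Compute the container-internal GID for a host GID that the dockremap
# user maps via a subgid file. Container GIDs are assigned sequentially
# in file order, so the mapped GID is the sum of the range sizes of all
# earlier dockremap lines.
mapped_gid() {
  # $1 = path to subgid file, $2 = host GID to look up
  awk -F: -v gid="$2" '
    BEGIN { next_id = 0 }
    $1 == "dockremap" {
      if ($2 == gid) { print next_id; exit }
      next_id += $3
    }
  ' "$1"
}

# Example (on the host, after appending the apex line):
#   mapped_gid /etc/subgid "$(getent group apex | cut -d : -f 3)"
```

Sanity-check the result against `docker compose exec frigate ls -l /dev/apex_0` before trusting it in `group_add`.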
#
# Build, install, and pin the Coral TPU drivers on Ubuntu 24.04 using https://github.com/jnicolson/gasket-builder
#
# The Coral drivers are broken in Ubuntu 23.04 and later.
# The instructions at https://coral.ai/docs/m2/get-started/#2a-on-linux will no longer work.
#
# The gasket-dkms driver must be manually patched and built, then installed as a .deb and its version pinned.
# jnicolson has created a Dockerfile that will patch and build the driver; see https://github.com/jnicolson/gasket-builder
#
# First, follow the instructions at https://coral.ai/docs/m2/get-started/#2a-on-linux,
# but skip installing their gasket-dkms package, which would fail to install:
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install libedgetpu1-std
# permissions per Coral docs
sudo sh -c "echo 'SUBSYSTEM==\"apex\", MODE=\"0660\", GROUP=\"apex\"' >> /etc/udev/rules.d/65-apex.rules"
sudo groupadd apex
# optionally add your own host user to the apex group
sudo adduser $USER apex
# Build the patched .deb:
git clone https://github.com/jnicolson/gasket-builder
cd gasket-builder
docker build --output . .
# Install the deb
sudo dpkg -i gasket-dkms_1.0-18_all.deb
# Pin the deb
sudo tee /etc/apt/preferences.d/pin-gasket-dkms <<EOF
Package: gasket-dkms
Pin: origin "packages.cloud.google.com"
Pin-Priority: -1
EOF
# /dev/apex_0 permissions
#
# Give the (rootless) Docker container access to /dev/apex_0 by mapping the host apex group's GID
# into the container's userns via /etc/subgid.
#
# First, append a line to /etc/subgid allowing the dockremap user to map the host apex GID:
echo "dockremap:$(getent group apex | cut -d : -f 3):1" | sudo tee -a /etc/subgid
# Next, find the mapped GID and configure docker-compose.yml to group_add that *mapped* GID to the container.
# See docker-compose.yml
# Reboot
sudo reboot
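# After the reboot, a quick sanity check is worthwhile before starting the
# container. A sketch (the check_device helper name is mine) that verifies a
# device node exists and carries the group the udev rule should have applied:

```shell
# Verify a device node exists and is owned by the expected group
# (per the udev rule above, /dev/apex_0 should be 0660 root:apex).
check_device() {
  # $1 = device path, $2 = expected group name
  [ -e "$1" ] || { echo "missing $1"; return 1; }
  grp=$(stat -c %G "$1")
  [ "$grp" = "$2" ] || { echo "$1 group is $grp, expected $2"; return 1; }
  echo "$1 ok (group $2)"
}

# On the host:
#   check_device /dev/apex_0 apex
#   check_device /dev/dri/renderD128 render
```

You might also confirm the driver loaded (`lsmod | grep gasket`) and that the pin is active (`apt-cache policy gasket-dkms`).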