
@philthynz
Last active February 23, 2026 12:28
Homey Self Hosted Server in Kubernetes

Overview

I run a single-node k3s cluster at home on a mini PC, and I wanted to share how to set up Homey SHS (Self Hosted Server) in Kubernetes with macvlan so the pod gets an IP address on my home network.

I deploy these manifests with ArgoCD, which creates each of them in the "homey" namespace as defined in the ArgoCD Application. If you don't use ArgoCD, you can still apply these manifests with any deployment method, such as kubectl.

There is an ingress and a service manifest here, but they aren't strictly needed, as Homey will give you Cloud Access via https://my.homey.app/. I only use them to reach the pod's local UI with a certificate.

I use Let's Encrypt certificates for the ingress. This isn't required, but I like to have everything accessible over HTTPS when I use the browser UI. Bear in mind that Homey SHS will still communicate with LAN devices over HTTP using the pod IP.


Multus

I use the Multus DHCP daemon to configure virtual NICs in the pod. This is an alternative to using the Kubernetes hostPort feature. I deploy the manifests with ArgoCD; if you don't use ArgoCD, you can take the values below and deploy the chart your own way with Helm.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: multus
  namespace: argo-cd
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-system
  project: default
  syncPolicy:
    automated:
      allowEmpty: true
      prune: true
      selfHeal: true
    managedNamespaceMetadata:
      labels:
        argocd.argoproj.io/instance: multus
    retry:
      backoff:
        duration: 30s
        factor: 2
        maxDuration: 2m
      limit: 5
    syncOptions:
      - Validate=false
      - CreateNamespace=true
  sources:
    # https://docs.k3s.io/networking/multus-ipams
    # Multus DHCP daemon
    - repoURL: https://rke2-charts.rancher.io
      chart: rke2-multus
      targetRevision: v4.2.315
      helm:
        valuesObject:
          config:
            fullnameOverride: multus
            cni_conf:
              confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
              binDir: /var/lib/rancher/k3s/data/cni/
              kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig
              # Comment the following line when using rke2-multus < v4.2.202
              multusAutoconfigDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
          manifests:
            dhcpDaemonSet: true

Network attachment definition

This is used to give the pod a virtual NIC. The "master" field is important: it should be the NIC on the k8s host where you want to bridge the network, usually the default NIC.

---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf-dhcp
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "master": "eno1",
    "mode": "bridge",
    "ipam": {
    "type": "dhcp"
    }
    }'
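
If you would rather pin Homey to a fixed LAN address instead of relying on DHCP, the NAD can use the CNI "static" IPAM plugin instead. A minimal sketch; the name "macvlan-conf-static", the address 192.168.1.50/24, and the gateway 192.168.1.1 are placeholders for your own network:

```yaml
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf-static
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "master": "eno1",
    "mode": "bridge",
    "ipam": {
      "type": "static",
      "addresses": [
        {
          "address": "192.168.1.50/24",
          "gateway": "192.168.1.1"
        }
      ]
    }
    }'
```

If you go this route, reference the new name in the pod annotation (k8s.v1.cni.cncf.io/networks: macvlan-conf-static) and make sure the address sits outside your router's DHCP pool.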

Persistent Volume Claim

This is where the Homey configuration will be stored. Make backups of it on your k8s host as needed, and change the storage size and class to suit your setup.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: homey-shs-pvc
  labels:
    app: homey-shs
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi

Service Account

This is so the pod doesn't run as the namespace's default service account, for security reasons and so we can expand its permissions later.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: homey-shs-sa
  labels:
    app: homey-shs

Deployment

Here we deploy Homey SHS. The "k8s.v1.cni.cncf.io/networks" annotation on the pod template attaches the macvlan network defined above.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: homey-shs
  labels:
    app: homey-shs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: homey-shs
  template:
    metadata:
      labels:
        app: homey-shs
      annotations:
        k8s.v1.cni.cncf.io/networks: macvlan-conf-dhcp
    spec:
      serviceAccountName: homey-shs-sa
      nodeSelector:
        kubernetes.io/os: linux
      tolerations: []
      securityContext:
        runAsUser: 0
        fsGroup: 1000
        runAsNonRoot: false
      containers:
        - name: homey-shs
          image: ghcr.io/athombv/homey-shs:12.12.0
          imagePullPolicy: IfNotPresent
          securityContext:
            allowPrivilegeEscalation: true
            privileged: true
            runAsUser: 0
          ports:
            - name: http
              containerPort: 4859
              protocol: TCP
            - name: https
              containerPort: 4860
              protocol: TCP
          env: []
          volumeMounts:
            - name: homey-shs-storage
              mountPath: /homey/user
      volumes:
        - name: homey-shs-storage
          persistentVolumeClaim:
            claimName: homey-shs-pvc

Service

Now we deploy the Service so our Ingress can work. This won't be required if you choose to access the UI at the pod IP, or if you want to use a .local hostname on your home network; a static DHCP reservation on your router would do.

---
apiVersion: v1
kind: Service
metadata:
  name: homey-shs
  labels:
    app: homey-shs
  annotations:
    description: "LoadBalancer service for homey-shs deployment"
spec:
  type: LoadBalancer
  selector:
    app: homey-shs
  ports:
    - name: http
      port: 4859
      targetPort: http

Ingress

Now we need to make the web UI accessible. I use ingress-nginx, installed from this chart: https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx. I use Cloudflare to host my custom domain under .lab.<domain> so I can get Let's Encrypt certificates using a DNS-01 challenge; more info here: https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/. This won't be required if you choose to access the UI at the pod IP or use .local on your home network.

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: homey-shs
  labels:
    app: homey-shs
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
spec:
  ingressClassName: nginx
  rules:
    - host: homey.lab.<domain>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: homey-shs
                port:
                  number: 4859
  tls:
    - secretName: homey-shs-tls
      hosts:
        - homey.lab.<domain>
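
The "letsencrypt-production" ClusterIssuer referenced above comes from cert-manager. A sketch of what it might look like with the Cloudflare DNS-01 solver, following the cert-manager docs linked above; <email> is a placeholder, and the Secret name "cloudflare-api-token" with key "api-token" is an assumption about where you store your Cloudflare API token:

```yaml
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <email>
    privateKeySecretRef:
      name: letsencrypt-production-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
```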

ArgoCD Application

Last of all, these manifests are deployed by an ArgoCD Application. This isn't required if you don't use ArgoCD. ArgoCD will need the Git repository configured first.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: homey
  namespace: argo-cd
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: homey
  project: default
  syncPolicy:
    automated:
      allowEmpty: true
      prune: true
      selfHeal: true
    managedNamespaceMetadata:
      labels:
        argocd.argoproj.io/instance: homey
    retry:
      backoff:
        duration: 30s
        factor: 2
        maxDuration: 2m
      limit: 5
    syncOptions:
      - Validate=true
      - CreateNamespace=true
  sources:
    - repoURL: <url>
      path: argocd/homey/manifests
      targetRevision: HEAD