@AntonFriberg
Last active January 14, 2026 10:09
Basic comparison between some Redis-compatible solutions for easy Kubernetes deployments

Redis-Compatible Solutions for Kubernetes

Here are some simple ways to deploy Redis-compatible solutions on Kubernetes.

🔑 KeyDB (Active-Active / Multi-Master)

Best for: Maximum simplicity. All pods act as masters and sync with each other, so no failover logic is required in the application.

Manual StatefulSet Deployment

apiVersion: v1
kind: ConfigMap
metadata:
  name: keydb-simple-config
data:
  keydb.conf: |
    active-replica yes
    multi-master yes
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: keydb-simple
  labels:
    app: keydb
spec:
  serviceName: keydb-simple
  replicas: 3
  selector:
    matchLabels:
      app: keydb
  template:
    metadata:
      labels:
        app: keydb
    spec:
      containers:
        - name: keydb-simple
          image: eqalpha/keydb:x86_64_v6.3.2
          command:
            - sh
            - -c
            - |
              keydb-server /etc/keydb/keydb.conf &
              sleep 10
              if [ "$HOSTNAME" != "keydb-simple-0" ]; then
                keydb-cli replicaof keydb-simple-0.keydb-simple 6379
              fi
              wait
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: data
              mountPath: /data
            - name: config
              mountPath: /etc/keydb
      volumes:
        - name: config
          configMap:
            name: keydb-simple-config
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        storageClassName: ontap-bronze-exp
        resources:
          requests:
            storage: 100Mi
---
apiVersion: v1
kind: Service
metadata:
  name: keydb-simple
spec:
  # Headless: required for the per-pod DNS names used by replicaof
  clusterIP: None
  selector:
    app: keydb
  ports:
    - port: 6379
      targetPort: 6379
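Once the pods are running, replication can be smoke-tested from outside the pods with kubectl exec. A hedged sketch, assuming the pod names produced by the StatefulSet above; `INFO replication` should report the active-replica role when `active-replica yes` is in effect:

```shell
# Inspect replication state on the first pod
kubectl exec -it keydb-simple-0 -- keydb-cli info replication

# Multi-master check: write on one pod, read the key back from another
kubectl exec -it keydb-simple-1 -- keydb-cli set hello world
kubectl exec -it keydb-simple-2 -- keydb-cli get hello
```

If the `get` on the third pod returns the value written to the second, active-active replication is working.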

Helm Release

Example using helmfile.yaml syntax:

repositories:
  - name: enapter
    url: https://enapter.github.io/charts/

releases:
  - name: keydb
    chart: enapter/keydb
    version: 0.48.0
    namespace: default
    values:
      - imageRepository: eqalpha/keydb
        imageTag: x86_64_v6.3.2
        nodes: 3
        port: 6379
        threads: 2
        multiMaster: "yes"
        activeReplicas: "yes"
        persistentVolume:
          enabled: true
          accessModes:
            - ReadWriteOnce
          size: 100Mi
          storageClass: ontap-bronze-exp
        resources: {}
        # Prometheus-operator ServiceMonitor
        serviceMonitor:
          # Redis exporter must also be enabled
          enabled: true
          labels:
          annotations:
          interval: 30s
        # Redis exporter
        exporter:
          enabled: true

๐Ÿ‰ Dragonfly (Primary-Replica)

Best for: High performance and multi-threaded scaling. Note that in this basic setup, dragonfly-0 is the writer and dragonfly-1 is the read-replica.

apiVersion: v1
kind: Service
metadata:
  name: dragonfly
  namespace: default
spec:
  # Headless: required for the dragonfly-0.dragonfly per-pod DNS name
  clusterIP: None
  selector:
    app: dragonfly
  ports:
    - name: dragonfly
      port: 6379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dragonfly
spec:
  serviceName: dragonfly
  replicas: 2
  selector:
    matchLabels:
      app: dragonfly
  template:
    metadata:
      labels:
        app: dragonfly
    spec:
      containers:
      - name: dragonfly
        image: docker.dragonflydb.io/dragonflydb/dragonfly:v1.36.0
        command: ["/bin/sh", "-c"]
        args:
          - |
            ORDINAL=${HOSTNAME##*-}
            if [ "$ORDINAL" -eq 0 ]; then
              dragonfly --logtostderr
            else
              dragonfly --logtostderr --replicaof dragonfly-0.dragonfly:6379
            fi
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        storageClassName: ontap-bronze-exp
        resources:
          requests:
            storage: 100Mi
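The startup script picks the primary by parsing the StatefulSet ordinal out of the pod hostname with shell parameter expansion. The logic can be sketched in isolation (the hostname is hard-coded here for illustration; inside the pod Kubernetes sets it):

```shell
# Strip everything up to and including the last "-" to get the ordinal,
# e.g. "dragonfly-1" -> "1"
HOSTNAME="dragonfly-1"
ORDINAL=${HOSTNAME##*-}
echo "$ORDINAL"

if [ "$ORDINAL" -eq 0 ]; then
  echo "would start as primary"
else
  echo "would start as replica of dragonfly-0.dragonfly:6379"
fi
```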

Benchmark

Run a simple benchmark by spawning an interactive pod in the cluster:

kubectl run -it --rm redis-test --image=redislabs/memtier_benchmark:2.2.1 --command -- bash

Then execute the benchmark against the different solutions:

# keydb simple deployment
memtier_benchmark -s keydb-simple
# keydb helm deployment
memtier_benchmark -s keydb
# dragonfly deployment (must target the primary, dragonfly-0)
memtier_benchmark -s dragonfly-0.dragonfly
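memtier_benchmark also accepts flags to shape the load. A hedged example with commonly used options; the values here are illustrative, not the settings used for the results below:

```shell
# 4 threads x 50 connections, 1:10 SET:GET ratio (memtier's default),
# fixed request count per client for reproducible comparisons
memtier_benchmark -s keydb-simple -p 6379 \
  --threads=4 --clients=50 \
  --ratio=1:10 --data-size=32 \
  --requests=100000 --hide-histogram
```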

Benchmark Results

Here is a result overview for my particular environment, with modest hardware and NFS-based PVC storage.

| Solution       | Total Ops/sec | Avg. Latency | p50 Latency | p99 Latency | Throughput (KB/sec) |
|----------------|---------------|--------------|-------------|-------------|---------------------|
| KeyDB (Simple) | 481,277       | 0.984 ms     | 0.799 ms    | 4.927 ms    | 20,414              |
| KeyDB (Helm)   | 497,677       | 0.908 ms     | 0.815 ms    | 4.927 ms    | 21,110              |
| Dragonfly      | 228,282       | 0.875 ms     | 0.543 ms    | 4.319 ms    | 9,683               |

Detailed Breakdown by Operation

| Solution       | Set (Ops/sec) | Get (Ops/sec) | Set p99 (ms) | Get p99 (ms) |
|----------------|---------------|---------------|--------------|--------------|
| KeyDB (Simple) | 43,796        | 437,480       | 5.055        | 4.927        |
| KeyDB (Helm)   | 45,288        | 452,389       | 4.927        | 4.927        |
| Dragonfly      | 20,773        | 207,509       | 4.383        | 4.287        |

Concluding Thoughts

Here are my key takeaways for choosing between these two solutions:

1. Throughput vs. Latency

  • KeyDB is the throughput winner: In this specific environment, KeyDB handled nearly double the operations per second compared to Dragonfly. The Helm-based deployment performed slightly better than the manual StatefulSet, likely due to optimized default threading configurations.
  • Dragonfly is the latency winner: While Dragonfly had lower throughput in this test, it consistently delivered lower response times. Its p50 latency is ~33% lower than KeyDB's, and its p99 tail latency is also superior.

2. The "Dragonfly Throughput" Paradox

Dragonfly is usually marketed as significantly faster than Redis/KeyDB thanks to its shared-nothing, multi-threaded architecture. The fact that it shows lower throughput here could be due to a few factors:

  • Resource Allocation: Dragonfly thrives on multi-core systems. If the Kubernetes nodes or pod limits are restricted to 1 or 2 cores, Dragonfly cannot fully utilize its multi-threaded advantage.
  • Client Bottlenecks: A single memtier_benchmark instance can sometimes become the bottleneck before the server does.

3. Operational Simplicity

  • KeyDB (Active-Active): The configuration uses multi-master yes. This is the "Holy Grail" for Kubernetes because you can point your application to a single Service, and it doesn't matter which pod receives the write. It simplifies logic significantly.
  • Dragonfly (Primary-Replica): In the current setup, dragonfly-0 is the designated writer. If that pod dies, your application will fail to write until a manual or orchestrated failover occurs (ideally, use the dragonfly-operator instead).
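The difference is easy to observe from a throwaway client pod: a write through the KeyDB Service succeeds no matter which pod answers, while a write through the Dragonfly Service only succeeds when it happens to land on dragonfly-0. A sketch; the pod name, image tag, and key names are illustrative:

```shell
# Write through the KeyDB Service: any backing pod will accept it
kubectl run -it --rm write-test --image=redis:7-alpine --command -- \
  redis-cli -h keydb-simple SET demo-key demo-value
```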

Final Recommendation

  • Choose KeyDB if you want the easiest possible migration from Redis with built-in high availability (Active-Active) and high throughput on modest hardware.
  • Choose Dragonfly if you are running on very large multi-core instances and your primary goal is minimizing tail latency (p99) for a smoother user experience.