Generated default Camunda config
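# Note: each setting below lists its matching environment variable in the trailing comment;
# setting that variable overrides the YAML value. Minimal sketch (the port value is chosen
# purely for illustration):
# camunda:
#   api:
#     grpc:
#       port: 26500   # equivalently, set CAMUNDA_API_GRPC_PORT=26500 in the environment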
camunda:
api:
grpc:
# Sets the address the gateway binds to
address: null # CAMUNDA_API_GRPC_ADDRESS
# Sets the interceptors
interceptors: null # CAMUNDA_API_GRPC_INTERCEPTORS
# Sets the number of threads the gateway will use to communicate with the broker cluster
management-threads: null # CAMUNDA_API_GRPC_MANAGEMENTTHREADS
# Sets the minimum keep alive interval. This setting specifies the minimum accepted interval between keep alive pings. This value must be specified as a positive integer followed by 's' for seconds, 'm' for minutes or 'h' for hours.
min-keep-alive-interval: "30s" # CAMUNDA_API_GRPC_MINKEEPALIVEINTERVAL
# Sets the port the gateway binds to
port: null # CAMUNDA_API_GRPC_PORT
# Sets the ssl configuration for the gateway
ssl: null # CAMUNDA_API_GRPC_SSL
long-polling:
# Enables long polling for available jobs
enabled: null # CAMUNDA_API_LONGPOLLING_ENABLED
# Sets the minimum number of empty responses; a minimum number of responses with a jobCount of 0 indicates that no jobs are available
min-empty-responses: null # CAMUNDA_API_LONGPOLLING_MINEMPTYRESPONSES
# Set the probe timeout for long polling in milliseconds
probe-timeout: null # CAMUNDA_API_LONGPOLLING_PROBETIMEOUT
# Set the timeout for long polling in milliseconds
timeout: null # CAMUNDA_API_LONGPOLLING_TIMEOUT
rest:
executor:
# Multiplier applied to the number of available processors to compute the executor's core pool size (minimum number of threads kept alive). Effective value: corePoolSize = availableProcessors * corePoolSizeMultiplier. Use a higher value if you have steady, continuous traffic and want to minimize cold-start latency; keep it low to allow the pool to scale down when idle. Default value: 1 (as defined in ApiExecutorConfiguration#DEFAULT_CORE_POOL_SIZE_MULTIPLIER)
core-pool-size-multiplier: 1 # CAMUNDA_API_REST_EXECUTOR_COREPOOLSIZEMULTIPLIER
# Time in seconds that threads above the core size may remain idle before being terminated. Lower values reclaim resources faster after bursts; higher values reduce thread creation/destruction churn if bursts are frequent. Default value: 60 (as defined in ApiExecutorConfiguration#DEFAULT_KEEP_ALIVE_SECONDS)
keep-alive: "60s" # CAMUNDA_API_REST_EXECUTOR_KEEPALIVE
# Multiplier applied to the number of available processors to compute the executor's maximum pool size (hard cap on threads). Effective value: maxPoolSize = availableProcessors * maxPoolSizeMultiplier. Must be >= corePoolSizeMultiplier. Increase cautiously; high values can cause oversubscription for CPU-bound workloads. Default value: 2 (as defined in ApiExecutorConfiguration#DEFAULT_MAX_POOL_SIZE_MULTIPLIER)
max-pool-size-multiplier: 2 # CAMUNDA_API_REST_EXECUTOR_MAXPOOLSIZEMULTIPLIER
# Capacity of the executor's task queue. A small bounded queue (e.g. 64) is recommended to handle short bursts while still allowing the pool to grow. Default value: 64 (as defined in ApiExecutorConfiguration#DEFAULT_QUEUE_CAPACITY)
queue-capacity: 64 # CAMUNDA_API_REST_EXECUTOR_QUEUECAPACITY
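# Worked example for the executor sizing above (illustrative, assuming an 8-core machine):
#   corePoolSize = availableProcessors * corePoolSizeMultiplier = 8 * 1 = 8
#   maxPoolSize  = availableProcessors * maxPoolSizeMultiplier  = 8 * 2 = 16
# With queue-capacity 64, up to 64 tasks are typically queued before additional threads
# (up to the maximum of 16) are created to absorb a burst.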
process-cache:
# Process cache expiration
expiration-idle: "0ms" # CAMUNDA_API_REST_PROCESSCACHE_EXPIRATIONIDLE
# Process cache max size
max-size: 100 # CAMUNDA_API_REST_PROCESSCACHE_MAXSIZE
cluster:
# Set the cluster id of the cluster. This setting is used to identify the cluster and should be unique across clusters. If not configured, the cluster ID will be set with a new random UUID.
cluster-id: null # CAMUNDA_CLUSTER_CLUSTERID
# Configures the compression algorithm for all messages sent between the brokers and between the broker and the gateway. Available options are NONE, GZIP and SNAPPY. This feature is useful when the network latency between the brokers is very high (for example when the brokers are deployed in different data centers). When latency is high, the network bandwidth is severely reduced, so enabling compression helps to improve throughput. Note: when latency is low, enabling compression may have a performance impact.
compression-algorithm: "none" # CAMUNDA_CLUSTER_COMPRESSIONALGORITHM
global-listeners:
user-task: null # CAMUNDA_CLUSTER_GLOBALLISTENERS_USERTASK
# Allows specifying a list of other known nodes to connect to on startup. The contact points of the internal network configuration must be specified. The format is [HOST:PORT]. Example: initialContactPoints: [ 192.168.1.22:26502, 192.168.1.32:26502 ]. To guarantee the cluster can survive network partitions, all nodes must be specified as initial contact points.
initial-contact-points: null # CAMUNDA_CLUSTER_INITIALCONTACTPOINTS
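# Example for a three-broker cluster (addresses are illustrative), listing every node as an
# initial contact point as recommended above:
# initial-contact-points:
#   - 192.168.1.22:26502
#   - 192.168.1.32:26502
#   - 192.168.1.42:26502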
membership:
# Configures whether to broadcast disputes to all members. If set to true, network traffic may increase, but it reduces the time to detect membership changes.
broadcast-disputes: true # CAMUNDA_CLUSTER_MEMBERSHIP_BROADCASTDISPUTES
# Configures whether to broadcast member updates to all members. If set to false, updates are gossiped among the members. If set to true, network traffic may increase, but it reduces the time to detect membership changes.
broadcast-updates: false # CAMUNDA_CLUSTER_MEMBERSHIP_BROADCASTUPDATES
# Sets the timeout after which a suspect member is declared dead.
failure-timeout: "10000ms" # CAMUNDA_CLUSTER_MEMBERSHIP_FAILURETIMEOUT
# Sets the number of members to which membership updates are sent at each gossip interval.
gossip-fanout: 2 # CAMUNDA_CLUSTER_MEMBERSHIP_GOSSIPFANOUT
# Sets the interval at which the membership updates are sent to a random member.
gossip-interval: "250ms" # CAMUNDA_CLUSTER_MEMBERSHIP_GOSSIPINTERVAL
# Configure whether to notify a suspect node on state changes.
notify-suspect: false # CAMUNDA_CLUSTER_MEMBERSHIP_NOTIFYSUSPECT
# Sets the interval at which to probe a random member.
probe-interval: "1000ms" # CAMUNDA_CLUSTER_MEMBERSHIP_PROBEINTERVAL
# Sets the timeout for a probe response.
probe-timeout: "100ms" # CAMUNDA_CLUSTER_MEMBERSHIP_PROBETIMEOUT
# Sets the number of probes failed before declaring a member is suspect.
suspect-probes: 3 # CAMUNDA_CLUSTER_MEMBERSHIP_SUSPECTPROBES
# Sets the interval at which this member synchronizes its membership information with a random member.
sync-interval: "10000ms" # CAMUNDA_CLUSTER_MEMBERSHIP_SYNCINTERVAL
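# Sketch for a high-latency network (values are illustrative, not recommendations): relaxing
# the failure detector reduces false suspicions at the cost of slower failure detection.
# membership:
#   probe-timeout: "500ms"
#   probe-interval: "2000ms"
#   failure-timeout: "20000ms"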
metadata:
# The number of nodes to which a cluster topology is gossiped.
gossip-fanout: null # CAMUNDA_CLUSTER_METADATA_GOSSIPFANOUT
# The delay between two sync requests in the ClusterConfigurationManager. A sync request is sent to another node to get the latest topology of the cluster.
sync-delay: null # CAMUNDA_CLUSTER_METADATA_SYNCDELAY
sync-initializer-delay: null # CAMUNDA_CLUSTER_METADATA_SYNCINITIALIZERDELAY
# The timeout for a sync request in the ClusterConfigurationManager.
sync-request-timeout: null # CAMUNDA_CLUSTER_METADATA_SYNCREQUESTTIMEOUT
# Set the name of the cluster
name: null # CAMUNDA_CLUSTER_NAME
network:
# Controls the advertised host. This is particularly useful if your broker stands behind a proxy. If not set, its default is computed as follows: if camunda.cluster.network.host was explicitly set, use it; otherwise, try to resolve the machine's hostname to an IP address and use that; if the hostname is not resolvable, use the first non-loopback IP address; if there is none, use the loopback address.
advertised-host: null # CAMUNDA_CLUSTER_NETWORK_ADVERTISEDHOST
command-api:
# Controls the advertised host. This is particularly useful if your broker stands behind a proxy. If omitted, it defaults to the value of zeebe.broker.network.commandApi.host if set, otherwise to the resolved value of zeebe.broker.network.advertisedHost.
advertised-host: null # CAMUNDA_CLUSTER_NETWORK_COMMANDAPI_ADVERTISEDHOST
# Controls the advertised port; if omitted defaults to the port. This is particularly useful if your broker stands behind a proxy.
advertised-port: null # CAMUNDA_CLUSTER_NETWORK_COMMANDAPI_ADVERTISEDPORT
# Overrides the host used for gateway-to-broker communication
host: null # CAMUNDA_CLUSTER_NETWORK_COMMANDAPI_HOST
# Sets the port used for gateway-to-broker communication
port: null # CAMUNDA_CLUSTER_NETWORK_COMMANDAPI_PORT
# Sends a heartbeat when no other data has been sent over an open connection within the specified interval. This is to ensure that the connection is kept open.
heartbeat-interval: "5s" # CAMUNDA_CLUSTER_NETWORK_HEARTBEATINTERVAL
# Connections that did not receive any message within the specified timeout will be closed.
heartbeat-timeout: "15s" # CAMUNDA_CLUSTER_NETWORK_HEARTBEATTIMEOUT
# Controls the default host the broker should bind to. Can be overwritten on a per-binding basis for client, management and replication
host: null # CAMUNDA_CLUSTER_NETWORK_HOST
internal-api:
# Controls the advertised host. This is particularly useful if your broker stands behind a proxy. If omitted, it defaults to the value of zeebe.broker.network.internalApi.host if set, otherwise to the resolved value of zeebe.broker.network.advertisedHost.
advertised-host: null # CAMUNDA_CLUSTER_NETWORK_INTERNALAPI_ADVERTISEDHOST
# Controls the advertised port; if omitted defaults to the port. This is particularly useful if your broker stands behind a proxy.
advertised-port: null # CAMUNDA_CLUSTER_NETWORK_INTERNALAPI_ADVERTISEDPORT
# Overrides the host used for internal broker-to-broker communication
host: null # CAMUNDA_CLUSTER_NETWORK_INTERNALAPI_HOST
# Sets the port used for internal broker-to-broker communication
port: null # CAMUNDA_CLUSTER_NETWORK_INTERNALAPI_PORT
# Sets the maximum size of the incoming and outgoing messages (i.e. commands and events).
max-message-size: "4MB" # CAMUNDA_CLUSTER_NETWORK_MAXMESSAGESIZE
# If a port offset is set, it will be added to all ports specified in the config or the default values. This is a shortcut so that not every port has to be specified individually. The offset is added to the second-to-last digit of the port, as Zeebe requires multiple ports. For example, a portOffset of 5 will increment all ports by 50, i.e. 26500 becomes 26550, and so on.
port-offset: 0 # CAMUNDA_CLUSTER_NETWORK_PORTOFFSET
# Sets the size of the socket receive buffer (SO_RCVBUF), for example 1MB. When not set (the default), the operating system can determine the optimal size automatically.
socket-receive-buffer: null # CAMUNDA_CLUSTER_NETWORK_SOCKETRECEIVEBUFFER
# Sets the size of the socket send buffer (SO_SNDBUF), for example 1MB. When not set (the default), the operating system can determine the optimal size automatically.
socket-send-buffer: null # CAMUNDA_CLUSTER_NETWORK_SOCKETSENDBUFFER
node-id: null # CAMUNDA_CLUSTER_NODEID
node-id-provider:
# Sets the type of the implementation for the dynamic node ID provider. FIXED refers to no provider.
type: "fixed" # CAMUNDA_CLUSTER_NODEIDPROVIDER_TYPE
# The number of partitions in the cluster.
partition-count: 1 # CAMUNDA_CLUSTER_PARTITIONCOUNT
partitioning:
# The list of fixed partition configurations for this partitioning setup. Initialized as an empty list by default. Used when the FIXED partitioning scheme is selected.
fixed: null # CAMUNDA_CLUSTER_PARTITIONING_FIXED
# The partitioning scheme used for assigning partitions to nodes. Defaults to ROUND_ROBIN.
scheme: "round-robin" # CAMUNDA_CLUSTER_PARTITIONING_SCHEME
raft:
# Sets the timeout for configuration change requests such as joining or leaving. Since changes are usually a multi-step process with multiple commits, a higher timeout than the default requestTimeout is recommended.
configuration-change-timeout: "10s" # CAMUNDA_CLUSTER_RAFT_CONFIGURATIONCHANGETIMEOUT
# The election timeout for Raft. If a follower does not receive a heartbeat from the leader within the election timeout, it can start a new leader election. electionTimeout should be greater than the configured heartbeatInterval. When the electionTimeout is large, there will be a delay in detecting a leader failure. When the electionTimeout is small, it can lead to false positives when detecting leader failures, leading to unnecessary leader changes. If the network latency between the nodes is high, it is recommended to use a higher election timeout. This is an advanced setting.
election-timeout: "2500ms" # CAMUNDA_CLUSTER_RAFT_ELECTIONTIMEOUT
# If the delay is > 0, then flush requests are delayed by at least the given period. It is recommended that you find the smallest delay here with which you achieve your performance goals. It's also likely that anything above 30s is not useful, as this is the typical default flush interval for the Linux OS. The default behavior is optimized for safety, and flushing occurs on every leader commit and follower append in a synchronous fashion.
flush-delay: 0 # CAMUNDA_CLUSTER_RAFT_FLUSHDELAY
# If false, explicit flushing of the Raft log is disabled, and flushing only occurs right before a snapshot is taken. You should only disable explicit flushing if you are willing to accept potential data loss at the expense of performance. Before disabling it, try the delayed options, which provide a trade-off between safety and performance. By default, for a given partition, data is flushed on every leader commit, and every follower append. This is to ensure consistency across all replicas. Disabling this can cause inconsistencies, and at worst, data corruption or data loss scenarios.
flush-enabled: true # CAMUNDA_CLUSTER_RAFT_FLUSHENABLED
# The heartbeat interval for Raft. The leader sends a heartbeat to a follower every heartbeatInterval. This is an advanced setting.
heartbeat-interval: "250ms" # CAMUNDA_CLUSTER_RAFT_HEARTBEATINTERVAL
# Sets the maximum batch size that is sent per append request to a follower.
max-append-batch-size: null # CAMUNDA_CLUSTER_RAFT_MAXAPPENDBATCHSIZE
# Sets the maximum number of appends that are sent per follower.
max-appends-per-follower: null # CAMUNDA_CLUSTER_RAFT_MAXAPPENDSPERFOLLOWER
# If the leader is not able to reach the quorum, the leader may step down. This is triggered if the leader is not able to reach the quorum of the followers for maxQuorumResponseTimeout. The minStepDownFailureCount also influences when the leader steps down. The higher the timeout, the slower the leader reacts to a partial network partition. When the timeout is lower, there might be false positives, and the leader might step down too quickly. When this value is 0, a default value of electionTimeout * 2 is used.
max-quorum-response-timeout: "0s" # CAMUNDA_CLUSTER_RAFT_MAXQUORUMRESPONSETIMEOUT
# If the leader is not able to reach the quorum, the leader may step down. This is triggered after a number of requests to a quorum of followers has failed and the number of failures reaches minStepDownFailureCount. The maxQuorumResponseTimeout also influences when the leader steps down.
min-step-down-failure-count: 3 # CAMUNDA_CLUSTER_RAFT_MINSTEPDOWNFAILURECOUNT
# Defines whether segment files are pre-allocated to their full size on creation or not. If true, when a new segment is created on demand, disk space will be reserved for its full maximum size. This helps avoid potential out of disk space errors which can be fatal when using memory mapped files, especially when running on network storage. In the best cases, it will also allocate contiguous blocks, giving a small performance boost. You may want to turn this off if your system does not support efficient file allocation via system calls, or if you notice an I/O penalty when creating segments.
preallocate-segment-files: true # CAMUNDA_CLUSTER_RAFT_PREALLOCATESEGMENTFILES
# Threshold used by the leader to decide between replicating a snapshot or records. The unit is number of records by which the follower may lag behind before the leader prefers replicating snapshots instead of records.
prefer-snapshot-replication-threshold: 100 # CAMUNDA_CLUSTER_RAFT_PREFERSNAPSHOTREPLICATIONTHRESHOLD
# When this flag is enabled, the leader election algorithm attempts to elect the leaders based on a pre-defined priority. As a result, it tries to distribute the leaders uniformly across the brokers. Note that it is only a best-effort strategy. It is not guaranteed to be a strictly uniform distribution.
priority-election-enabled: true # CAMUNDA_CLUSTER_RAFT_PRIORITYELECTIONENABLED
# Sets the timeout for all requests sent by Raft leaders and followers. When modifying the value of requestTimeout, it might also be useful to update snapshotRequestTimeout.
request-timeout: null # CAMUNDA_CLUSTER_RAFT_REQUESTTIMEOUT
# Defines the strategy to use to preallocate segment files when "preallocateSegmentFiles" is set to true. Possible options are: NOOP: does not preallocate files, same as setting `preallocateSegmentFiles=false`. FILL: fills the new segments with zeroes to ensure the disk space is reserved and the file is initialized with zeroes. POSIX: reserves the space required on disk using the `fallocate` POSIX system call; depending on the filesystem, this may not ensure that enough disk space is available. This strategy reduces the write throughput to disk, which can be particularly useful when using network file systems. Running `fallocate` requires a POSIX filesystem and JNI calls, which might not be available. If you want to make sure that `fallocate` is used, configure this strategy; otherwise use one of the fallback options below. POSIX_OR_NOOP: uses the POSIX strategy, or NOOP if it is not possible. POSIX_OR_FILL: uses the POSIX strategy, or FILL if it is not possible.
segment-preallocation-strategy: "posix-or-fill" # CAMUNDA_CLUSTER_RAFT_SEGMENTPREALLOCATIONSTRATEGY
# Sets the maximum size of snapshot chunks sent by raft leaders to the followers.
snapshot-chunk-size: null # CAMUNDA_CLUSTER_RAFT_SNAPSHOTCHUNKSIZE
# Sets the timeout for all snapshot requests sent by raft leaders to the followers. If the network latency between brokers is high, it would help to set a higher timeout here.
snapshot-request-timeout: null # CAMUNDA_CLUSTER_RAFT_SNAPSHOTREQUESTTIMEOUT
# The number of replicas for each partition in the cluster. The replication factor cannot be greater than the number of nodes in the cluster.
replication-factor: 1 # CAMUNDA_CLUSTER_REPLICATIONFACTOR
# The number of nodes in the cluster.
size: 1 # CAMUNDA_CLUSTER_SIZE
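# Example topology (illustrative): a three-broker cluster where every partition is replicated
# to every broker. As noted above, replication-factor must not exceed the cluster size.
# size: 3
# partition-count: 3
# replication-factor: 3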
data:
audit-log:
client:
categories: null # CAMUNDA_DATA_AUDITLOG_CLIENT_CATEGORIES
excludes: null # CAMUNDA_DATA_AUDITLOG_CLIENT_EXCLUDES
enabled: true # CAMUNDA_DATA_AUDITLOG_ENABLED
user:
categories: null # CAMUNDA_DATA_AUDITLOG_USER_CATEGORIES
excludes: null # CAMUNDA_DATA_AUDITLOG_USER_EXCLUDES
export:
# Configures the rate at which exporter positions are distributed to the followers. This is useful for fail-over and taking snapshots. The follower is able to take snapshots based on the replayed and distributed export positions. When a follower takes over, it can recover from the snapshot and doesn't need to replay and export everything. It can, for example, start from the last exported position it received via the distribution mechanism.
distribution-interval: null # CAMUNDA_DATA_EXPORT_DISTRIBUTIONINTERVAL
# Enables the exporters to skip records by position. Allows skipping certain records by their position. This is useful for debugging or for skipping a record that is preventing processing or exporting from continuing. Record positions listed here will be skipped in all exporters. The value is a comma-separated list of record positions to skip. Whitespace is ignored.
skip-records: null # CAMUNDA_DATA_EXPORT_SKIPRECORDS
# This section allows configuring exporters
exporters: null # CAMUNDA_DATA_EXPORTERS
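# Sketch of a named exporter definition under this key. The shape below follows the classic
# Zeebe exporter configuration (class-name/jar-path/args) and is an assumption for this
# unified config rather than a confirmed schema; class and path are hypothetical:
# exporters:
#   my-exporter:
#     class-name: com.example.MyExporter
#     jar-path: /usr/local/exporters/my-exporter.jar
#     args:
#       url: http://example.com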
history-deletion:
delay-between-runs: "1s" # CAMUNDA_DATA_HISTORYDELETION_DELAYBETWEENRUNS
dependent-row-limit: 10000 # CAMUNDA_DATA_HISTORYDELETION_DEPENDENTROWLIMIT
max-delay-between-runs: "5m" # CAMUNDA_DATA_HISTORYDELETION_MAXDELAYBETWEENRUNS
queue-batch-size: 100 # CAMUNDA_DATA_HISTORYDELETION_QUEUEBATCHSIZE
primary-storage:
backup:
azure:
# Account key that is used to authenticate with Azure. Can only be used in combination with an account name. If account credentials or connection string are not provided, authentication will use credentials from the runtime environment: ...
account-key: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_AZURE_ACCOUNTKEY
# Account name used to authenticate with Azure. Can only be used in combination with an account key. If account credentials or connection string are not provided, authentication will use credentials from the runtime environment: ...
account-name: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_AZURE_ACCOUNTNAME
# Defines the container name where backup contents are saved.
base-path: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_AZURE_BASEPATH
# The connection string configures endpoint, account name and account key all at once. If connection string or account credentials are not provided, authentication will use credentials from the runtime environment: ...
connection-string: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_AZURE_CONNECTIONSTRING
# If enabled, the backup container is created if it does not already exist.
create-container: true # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_AZURE_CREATECONTAINER
# Azure endpoint to connect to. Required unless a connection string is specified.
endpoint: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_AZURE_ENDPOINT
sas-token:
# The SAS token must be of the following types: "delegation", "service" or "account".
type: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_AZURE_SASTOKEN_TYPE
# Specifies the key value of the SAS token.
value: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_AZURE_SASTOKEN_VALUE
checkpoint-interval: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_CHECKPOINTINTERVAL
continuous: false # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_CONTINUOUS
filesystem:
# Set the base path to store all related backup files in.
base-path: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_FILESYSTEM_BASEPATH
gcs:
# Configures which authentication method is used for connecting to GCS. Can be either 'auto' or 'none'. Choosing 'auto' means that the GCS client uses application default credentials which automatically discovers appropriate credentials from the runtime environment: ... Choosing 'none' means that no authentication is attempted which is only applicable for testing with emulated GCS.
auth: "auto" # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_GCS_AUTH
# When set, all blobs in the bucket will use this prefix. Useful for using the same bucket for multiple Zeebe clusters. In this case, basePath must be unique. Should not start or end with '/' character. Must be non-empty and not consist of only '/' characters.
base-path: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_GCS_BASEPATH
# Name of the bucket where the backup will be stored. The bucket must already exist. The bucket must not be shared with other Zeebe clusters unless basePath is also set.
bucket-name: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_GCS_BUCKETNAME
# When set, this overrides the host that the GCS client connects to. By default, this is not set because the client can automatically discover the correct host to connect to.
host: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_GCS_HOST
offset: 0 # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_OFFSET
required: false # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_REQUIRED
retention:
window: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_RETENTION_WINDOW
s3:
# Configure access credentials. If either accessKey or secretKey is not provided, the credentials will be determined as documented in ...
access-key: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_S3_ACCESSKEY
# Configure a maximum duration for all S3 client API calls. Lower values will ensure that failed or slow API calls don't block other backups but may increase the risk that backups can't be stored if uploading parts of the backup takes longer than the configured timeout. See ...
api-call-timeout: "180s" # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_S3_APICALLTIMEOUT
# When set, all objects in the bucket will use this prefix. Must be non-empty and not start or end with '/'. Useful for using the same bucket for multiple Zeebe clusters. In this case, basePath must be unique.
base-path: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_S3_BASEPATH
# Name of the bucket where the backup will be stored. The bucket must already exist. The bucket must not be shared with other Zeebe clusters. bucketName must not be empty.
bucket-name: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_S3_BUCKETNAME
# When set to an algorithm such as 'zstd', enables compression of backup contents. When not set or set to 'none', backup content is not compressed. Enabling compression reduces the required storage space for backups in S3 but also increases the impact on CPU and disk utilization while taking a backup.
compression: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_S3_COMPRESSION
# Timeout for acquiring an already-established connection from a connection pool to a remote service.
connection-acquisition-timeout: "45s" # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_S3_CONNECTIONACQUISITIONTIMEOUT
# Configure URL endpoint for the store. If no endpoint is provided, it will be determined based on the configured region.
endpoint: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_S3_ENDPOINT
# When enabled, forces the s3 client to use path-style access. By default, the client will automatically choose between path-style and virtual-hosted-style. Should only be enabled if the s3 compatible storage cannot support virtual-hosted-style. See ...
force-path-style-access: false # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_S3_FORCEPATHSTYLEACCESS
# Maximum number of connections allowed in a connection pool. This is used to restrict the maximum number of concurrent uploads, to avoid connection timeouts when uploading backups with large/many files.
max-concurrent-connections: 50 # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_S3_MAXCONCURRENTCONNECTIONS
# Configure AWS region. If no region is provided it will be determined as documented in ...
region: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_S3_REGION
# Configure access credentials. If either accessKey or secretKey is not provided, the credentials will be determined as documented in ...
secret-key: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_S3_SECRETKEY
# Enable s3 md5 plugin for legacy support
support-legacy-md5: false # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_S3_SUPPORTLEGACYMD5
schedule: null # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_SCHEDULE
# Set the backup store type. Supported values are [NONE, S3, GCS, AZURE, FILESYSTEM]. Default value is NONE. When NONE, no backup store is configured and no backup will be taken. Use S3 to use any S3 compatible storage (https://docs.aws.amazon.com/AmazonS3/latest/API/Type_API_Reference.html). Use GCS to use Google Cloud Storage (https://cloud.google.com/storage/) Use AZURE to use Azure Storage (https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction) Use FILESYSTEM to use filesystem storage Note: This configuration applies to the backup of primary storage.
store: "none" # CAMUNDA_DATA_PRIMARYSTORAGE_BACKUP_STORE
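# Sketch of enabling S3 backups using the keys defined above (bucket, region and base path
# are placeholders, not defaults):
# backup:
#   store: "s3"
#   s3:
#     bucket-name: my-zeebe-backups
#     region: eu-central-1
#     base-path: cluster-a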
# Specify the directory in which data is stored.
directory: "data" # CAMUNDA_DATA_PRIMARYSTORAGE_DIRECTORY
disk:
free-space:
# When the free space available is less than this value, the broker rejects all client commands and pauses processing.
processing: "2GB" # CAMUNDA_DATA_PRIMARYSTORAGE_DISK_FREESPACE_PROCESSING
# When the free space available is less than this value, broker stops receiving replicated events. This value must be less than `...free-space.processing`. It is recommended to configure free space large enough for at least one log segment and one snapshot. This is because a partition needs enough space to take a new snapshot to be able to compact the log segments to make disk space available again.
replication: "1GB" # CAMUNDA_DATA_PRIMARYSTORAGE_DISK_FREESPACE_REPLICATION
# Configures disk monitoring to prevent getting into a non-recoverable state due to running out of disk space. When monitoring is enabled, the broker rejects commands and pauses replication when the required free space is not available.
monitoring-enabled: true # CAMUNDA_DATA_PRIMARYSTORAGE_DISK_MONITORINGENABLED
# Sets the interval at which the disk usage is monitored
monitoring-interval: "1s" # CAMUNDA_DATA_PRIMARYSTORAGE_DISK_MONITORINGINTERVAL
log-stream:
# The density of the log index, which determines how frequently index entries are created in the log. This value specifies the number of log entries between each index entry. A lower value increases the number of index entries (improving lookup speed but using more memory), while a higher value reduces the number of index entries (saving memory but potentially slowing lookups). Valid values: any positive integer (recommended range: 1-1000). Default: 100.
log-index-density: 100 # CAMUNDA_DATA_PRIMARYSTORAGE_LOGSTREAM_LOGINDEXDENSITY
# The size of data log segment files.
log-segment-size: "128MB" # CAMUNDA_DATA_PRIMARYSTORAGE_LOGSTREAM_LOGSEGMENTSIZE
rocks-db:
# Configures which, if any, RocksDB column family access metrics are exposed. Valid values are none (the default), and fine which exposes many metrics covering the read, write, delete and iteration latency per partition and column family.
access-metrics: "none" # CAMUNDA_DATA_PRIMARYSTORAGE_ROCKSDB_ACCESSMETRICS
# Specify custom column family options overwriting Zeebe's own defaults. WARNING: This setting requires in-depth knowledge of Zeebe's embedded database: RocksDB. The expected property key names and values are derived from RocksDB's C implementation, and are not limited to the provided examples below. Please look in RocksDB's SCM repo for the files: `cf_options.h` and `options_helper.cc`.
column-family-options: null # CAMUNDA_DATA_PRIMARYSTORAGE_ROCKSDB_COLUMNFAMILYOPTIONS
# Configures a rate limit for write I/O of RocksDB. Setting any value less than or equal to 0 will disable this, which is the default setting. Setting a rate limit on the write I/O can help achieve more stable performance by avoiding write spikes consuming all available IOPS, leading to more predictable read rates.
io-rate-bytes-per-second: 0 # CAMUNDA_DATA_PRIMARYSTORAGE_ROCKSDB_IORATEBYTESPERSECOND
# Configures how many files are kept open by RocksDB; by default it is unlimited (-1). This is a performance optimization: if you set a value greater than zero, it will keep track of and cap the number of open files in the TableCache. On accessing the files it needs to look them up in the cache. You should configure this property if the maximum number of open files is limited on your system, or if you have thousands of files in your RocksDB state, as there is a memory overhead to keeping all of them open, and setting maxOpenFiles will bound that.
max-open-files: -1 # CAMUNDA_DATA_PRIMARYSTORAGE_ROCKSDB_MAXOPENFILES
# Configures the maximum number of simultaneous write buffers/memtables RocksDB will keep in memory. Normally about 2/3 of the memoryLimit is used by the write buffers, and this is shared equally among the write buffers. This means the higher maxWriteBufferNumber is, the less memory is available for each one, so you will flush less data at once but may flush more often.
max-write-buffer-number: 6 # CAMUNDA_DATA_PRIMARYSTORAGE_ROCKSDB_MAXWRITEBUFFERNUMBER
# Configures the memory allocation strategy by RocksDB. If set to 'PARTITION', the total memory allocated to RocksDB will be the number of partitions times the configured memory limit. If the value is set to 'BROKER', the total memory allocated to RocksDB will be equal to the configured memory limit. If set to 'AUTO', Zeebe will allocate the remaining memory available to RocksDB after accounting for other components, such as the JVM heap and other native memory consumers.
memory-allocation-strategy: "auto" # CAMUNDA_DATA_PRIMARYSTORAGE_ROCKSDB_MEMORYALLOCATIONSTRATEGY
# Configures the memory limit, which can be used by RocksDB. Be aware that this setting only applies to RocksDB, which is used by the Zeebe's state management with the memory limit being shared across all partitions in a broker. The memory allocation strategy depends on the memoryAllocationStrategy setting.
memory-limit: null # CAMUNDA_DATA_PRIMARYSTORAGE_ROCKSDB_MEMORYLIMIT
# Configures how many write buffers should be full before they are merged and flushed to disk. A higher number here means you may flush less often, but will flush more data at once. A lower number means flushing more often, but flushing less data at once.
min-write-buffer-number-to-merge: 3 # CAMUNDA_DATA_PRIMARYSTORAGE_ROCKSDB_MINWRITEBUFFERNUMBERTOMERGE
# Configures whether the RocksDB SST files should be partitioned based on some virtual column families. By default RocksDB will not partition the SST files, which might affect the compaction of certain key ranges. Enabling this option gives RocksDB good hints on how to improve compaction and reduce write amplification. Benchmarks have shown impressive results, sustaining performance on larger states. This setting will increase the general file count of runtime and snapshots.
sst-partitioning-enabled: true # CAMUNDA_DATA_PRIMARYSTORAGE_ROCKSDB_SSTPARTITIONINGENABLED
# Enables RocksDB statistics, which will be written to the RocksDB log file.
statistics-enabled: false # CAMUNDA_DATA_PRIMARYSTORAGE_ROCKSDB_STATISTICSENABLED
# Configures whether the RocksDB write-ahead log (WAL) is used. By default, every write in RocksDB goes to the active write buffer and the WAL; this helps recover a RocksDB instance should it crash before the write buffer is flushed. Zeebe, however, only recovers from a specific point-in-time snapshot, and never from a previously active RocksDB instance, which makes it a good candidate for disabling the WAL. The WAL is disabled by default as this can improve the performance of Zeebe.
wal-disabled: true # CAMUNDA_DATA_PRIMARYSTORAGE_ROCKSDB_WALDISABLED
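# Sketch of constraining RocksDB memory using the keys above (values are illustrative): with
# the PARTITION strategy, total RocksDB memory is the number of partitions times memory-limit.
# rocks-db:
#   memory-allocation-strategy: "partition"
#   memory-limit: "512MB"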
# Specifies the directory in which the runtime is stored. By default, the runtime is stored in the `directory` configured for data. If `runtime-directory` is configured, the configured directory will be used, with a subdirectory for each partition to store its runtime. There is no need to store the runtime in persistent storage. This configuration allows splitting the runtime onto another disk to optimize performance and disk usage. Note: if the runtime is on a different disk than the data directory, files need to be copied to the data directory while taking a snapshot. This may impact disk I/O or performance during snapshotting.
runtime-directory: null # CAMUNDA_DATA_PRIMARYSTORAGE_RUNTIMEDIRECTORY
secondary-storage:
# When enabled, the default exporter camundaexporter is automatically configured using the secondary-storage properties. Manual configuration of camundaexporter is not necessary. If disabled, camundaexporter will not be configured automatically, but can still be enabled through manual configuration if required. Manual configuration of camundaexporter is generally not recommended, and can result in unexpected behavior if not configured correctly.
autoconfigure-camunda-exporter: true # CAMUNDA_DATA_SECONDARYSTORAGE_AUTOCONFIGURECAMUNDAEXPORTER
elasticsearch:
backup:
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_BACKUP_DATABASENAME
batch-operation-cache:
cache-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_BATCHOPERATIONCACHE_CACHENAME
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_BATCHOPERATIONCACHE_DATABASENAME
batch-operations:
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_BATCHOPERATIONS_DATABASENAME
bulk:
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_BULK_DATABASENAME
# Name of the cluster
cluster-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_CLUSTERNAME
# The connection timeout for ES and OS connector
connection-timeout: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_CONNECTIONTIMEOUT
# Whether to create the schema automatically
create-schema: true # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_CREATESCHEMA
# The date format for ES and OS
date-format: "yyyy-MM-dd'T'HH:mm:ss.SSSZZ" # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_DATEFORMAT
decision-requirements-cache:
cache-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_DECISIONREQUIREMENTSCACHE_CACHENAME
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_DECISIONREQUIREMENTSCACHE_DATABASENAME
form-cache:
cache-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_FORMCACHE_CACHENAME
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_FORMCACHE_DATABASENAME
history:
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_HISTORY_DATABASENAME
incident-notifier:
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_INCIDENTNOTIFIER_DATABASENAME
# Prefix to apply to the indexes.
index-prefix: "" # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_INDEXPREFIX
# Sets the interceptor plugins
interceptor-plugins: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_INTERCEPTORPLUGINS
# How many replicas Elasticsearch uses for all indices.
number-of-replicas: 0 # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_NUMBEROFREPLICAS
# Per-index replica overrides.
number-of-replicas-per-index: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_NUMBEROFREPLICASPERINDEX
# How many shards Elasticsearch uses for all indices.
number-of-shards: 1 # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_NUMBEROFSHARDS
# Per-index shard overrides.
number-of-shards-per-index: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_NUMBEROFSHARDSPERINDEX
# Password for the database configured as secondary storage.
password: "" # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_PASSWORD
post-export:
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_POSTEXPORT_DATABASENAME
process-cache:
cache-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_PROCESSCACHE_CACHENAME
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_PROCESSCACHE_DATABASENAME
security:
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_SECURITY_DATABASENAME
# The socket timeout for ES and OS connector
socket-timeout: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_SOCKETTIMEOUT
# Template priority for index templates.
template-priority: null # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_TEMPLATEPRIORITY
# Endpoint for the database configured as secondary storage.
url: "http://localhost:9200" # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_URL
# Username for the database configured as secondary storage.
username: "" # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_USERNAME
# Variable size threshold for the database configured as secondary storage.
variable-size-threshold: 8191 # CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_VARIABLESIZETHRESHOLD
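# Minimal sketch of pointing secondary storage at a secured Elasticsearch using the keys
# above (URL and credentials are placeholders):
# elasticsearch:
#   url: "https://es.example.com:9200"
#   username: "camunda"
#   password: "changeme"
#   index-prefix: "camunda"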
opensearch:
backup:
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_BACKUP_DATABASENAME
batch-operation-cache:
cache-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_BATCHOPERATIONCACHE_CACHENAME
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_BATCHOPERATIONCACHE_DATABASENAME
batch-operations:
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_BATCHOPERATIONS_DATABASENAME
bulk:
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_BULK_DATABASENAME
# Name of the cluster
cluster-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_CLUSTERNAME
# The connection timeout for ES and OS connector
connection-timeout: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_CONNECTIONTIMEOUT
# Whether to create the schema automatically
create-schema: true # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_CREATESCHEMA
# The date format for ES and OS
date-format: "yyyy-MM-dd'T'HH:mm:ss.SSSZZ" # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_DATEFORMAT
decision-requirements-cache:
cache-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_DECISIONREQUIREMENTSCACHE_CACHENAME
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_DECISIONREQUIREMENTSCACHE_DATABASENAME
form-cache:
cache-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_FORMCACHE_CACHENAME
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_FORMCACHE_DATABASENAME
history:
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_HISTORY_DATABASENAME
incident-notifier:
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_INCIDENTNOTIFIER_DATABASENAME
# Prefix to apply to the indexes.
index-prefix: "" # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_INDEXPREFIX
# Sets the interceptor plugins
interceptor-plugins: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_INTERCEPTORPLUGINS
# How many replicas OpenSearch uses for all indices.
number-of-replicas: 0 # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_NUMBEROFREPLICAS
# Per-index replica overrides.
number-of-replicas-per-index: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_NUMBEROFREPLICASPERINDEX
# How many shards OpenSearch uses for all indices.
number-of-shards: 1 # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_NUMBEROFSHARDS
# Per-index shard overrides.
number-of-shards-per-index: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_NUMBEROFSHARDSPERINDEX
# Password for the database configured as secondary storage.
password: "" # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_PASSWORD
post-export:
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_POSTEXPORT_DATABASENAME
process-cache:
cache-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_PROCESSCACHE_CACHENAME
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_PROCESSCACHE_DATABASENAME
security:
database-name: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_SECURITY_DATABASENAME
# The socket timeout for ES and OS connector
socket-timeout: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_SOCKETTIMEOUT
# Template priority for index templates.
template-priority: null # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_TEMPLATEPRIORITY
# Endpoint for the database configured as secondary storage.
url: "http://localhost:9200" # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_URL
# Username for the database configured as secondary storage.
username: "" # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_USERNAME
# Variable size threshold for the database configured as secondary storage.
variable-size-threshold: 8191 # CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_VARIABLESIZETHRESHOLD
rdbms:
# Batch operation cache configuration. Defines the size of the batch operation cache.
batch-operation-cache: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_BATCHOPERATIONCACHE
# The number of batch operation items to insert in a single batched SQL when creating the items for a batch operation. This is only relevant when exportBatchOperationItemsOnCreation is set to true.
batch-operation-item-insert-block-size: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_BATCHOPERATIONITEMINSERTBLOCKSIZE
# If true, batch operation items are exported to the database when the batch operation is created (status = ACTIVE). If false, the items are created on demand when they have been processed. When set to true, this ensures that the items are available when the batch operation is created, but it may lead to a delay in the creation of the batch operation if there are many items to create.
export-batch-operation-items-on-creation: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_EXPORTBATCHOPERATIONITEMSONCREATION
# The interval at which the exporters execution queue is flushed.
flush-interval: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_FLUSHINTERVAL
history:
# The default time to live for cancel process instance batch operations. Specified in Java Duration format.
batch-operation-cancel-process-instance-history-t-t-l: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_HISTORY_BATCHOPERATIONCANCELPROCESSINSTANCEHISTORYTTL
# The default time to live for migrate process instance batch operations. Specified in Java Duration format.
batch-operation-migrate-process-instance-history-t-t-l: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_HISTORY_BATCHOPERATIONMIGRATEPROCESSINSTANCEHISTORYTTL
# The default time to live for modify process instance batch operations. Specified in Java Duration format.
batch-operation-modify-process-instance-history-t-t-l: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_HISTORY_BATCHOPERATIONMODIFYPROCESSINSTANCEHISTORYTTL
# The default time to live for resolve incident batch operations. Specified in Java Duration format.
batch-operation-resolve-incident-history-t-t-l: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_HISTORY_BATCHOPERATIONRESOLVEINCIDENTHISTORYTTL
# The default time to live for decision instances without a process instance. Specified in Java Duration format.
decision-instance-t-t-l: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_HISTORY_DECISIONINSTANCETTL
# The default time to live for all batch operations. Specified in Java Duration format.
default-batch-operation-history-t-t-l: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_HISTORY_DEFAULTBATCHOPERATIONHISTORYTTL
# The default time to live for all camunda entities that support history time to live. Specified in Java Duration format.
default-history-t-t-l: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_HISTORY_DEFAULTHISTORYTTL
# The number of history records to delete in one batch.
history-cleanup-batch-size: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_HISTORY_HISTORYCLEANUPBATCHSIZE
# The max interval between two history cleanup runs. This will be reached when the system is constantly finding no data to clean up. Specified in Java Duration format.
max-history-cleanup-interval: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_HISTORY_MAXHISTORYCLEANUPINTERVAL
# The min interval between two history cleanup runs. This will be reached when the system is constantly finding data to clean up. Specified in Java Duration format.
min-history-cleanup-interval: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_HISTORY_MINHISTORYCLEANUPINTERVAL
# The interval at which usage metrics cleanup is performed. Specified in Java Duration format.
usage-metrics-cleanup: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_HISTORY_USAGEMETRICSCLEANUP
# The default time to live for usage metrics. Specified in Java Duration format.
usage-metrics-t-t-l: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_HISTORY_USAGEMETRICSTTL
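# The history TTL and cleanup settings above use the Java Duration (ISO-8601) format, e.g.
# "PT30S" for 30 seconds, "PT1H" for 1 hour, "P30D" for 30 days. Example (values illustrative):
# history:
#   default-history-t-t-l: "P30D"
#   min-history-cleanup-interval: "PT1M"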
metrics:
# The duration for which the table row count metrics are cached before being refreshed from the database. This helps avoid performance impact on the database.
table-row-count-cache-duration: "5m" # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_METRICS_TABLEROWCOUNTCACHEDURATION
# Password for the database configured as secondary storage.
password: "" # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_PASSWORD
# The prefix to use for all database artifacts like tables, indexes etc.
prefix: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_PREFIX
# Process definition cache configuration. Defines the size of the process definition cache.
process-cache: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_PROCESSCACHE
# The maximum memory (in MB) that the execution queue can consume before it is flushed to the database. This helps prevent OOM when processing large processes with large variables.
queue-memory-limit: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_QUEUEMEMORYLIMIT
# The maximum size of the exporters execution queue before it is flushed to the database.
queue-size: null # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_QUEUESIZE
# Endpoint for the database configured as secondary storage.
url: "http://localhost:9200" # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_URL
# Username for the database configured as secondary storage.
username: "" # CAMUNDA_DATA_SECONDARYSTORAGE_RDBMS_USERNAME
# Configuration for retention behavior
retention: null # CAMUNDA_DATA_SECONDARYSTORAGE_RETENTION
# Determines the type of the secondary storage database.
type: null # CAMUNDA_DATA_SECONDARYSTORAGE_TYPE
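# The type key selects which of the storage blocks above applies. Sketch, assuming the block
# names above are also the accepted type identifiers (confirm against the release docs):
# secondary-storage:
#   type: "elasticsearch"
#   elasticsearch:
#     url: "http://localhost:9200"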
# How often we take snapshots of streams (time unit)
snapshot-period: "5m" # CAMUNDA_DATA_SNAPSHOTPERIOD
database:
aws-enabled: null # CAMUNDA_DATABASE_AWSENABLED
cluster-name: null # CAMUNDA_DATABASE_CLUSTERNAME
connect-timeout: null # CAMUNDA_DATABASE_CONNECTTIMEOUT
date-format: null # CAMUNDA_DATABASE_DATEFORMAT
field-date-format: null # CAMUNDA_DATABASE_FIELDDATEFORMAT
index-prefix: null # CAMUNDA_DATABASE_INDEXPREFIX
index:
number-of-replicas: null # CAMUNDA_DATABASE_INDEX_NUMBEROFREPLICAS
number-of-shards: null # CAMUNDA_DATABASE_INDEX_NUMBEROFSHARDS
replicas-by-index-name: null # CAMUNDA_DATABASE_INDEX_REPLICASBYINDEXNAME
shards-by-index-name: null # CAMUNDA_DATABASE_INDEX_SHARDSBYINDEXNAME
template-priority: null # CAMUNDA_DATABASE_INDEX_TEMPLATEPRIORITY
variable-size-threshold: null # CAMUNDA_DATABASE_INDEX_VARIABLESIZETHRESHOLD
interceptor-plugins: null # CAMUNDA_DATABASE_INTERCEPTORPLUGINS
password: null # CAMUNDA_DATABASE_PASSWORD
retention:
apply-policy-job-interval: null # CAMUNDA_DATABASE_RETENTION_APPLYPOLICYJOBINTERVAL
enabled: null # CAMUNDA_DATABASE_RETENTION_ENABLED
minimum-age: null # CAMUNDA_DATABASE_RETENTION_MINIMUMAGE
policy-name: null # CAMUNDA_DATABASE_RETENTION_POLICYNAME
usage-metrics-minimum-age: null # CAMUNDA_DATABASE_RETENTION_USAGEMETRICSMINIMUMAGE
usage-metrics-policy-name: null # CAMUNDA_DATABASE_RETENTION_USAGEMETRICSPOLICYNAME
security: null # CAMUNDA_DATABASE_SECURITY
socket-timeout: null # CAMUNDA_DATABASE_SOCKETTIMEOUT
url: null # CAMUNDA_DATABASE_URL
username: null # CAMUNDA_DATABASE_USERNAME
monitoring:
jfr: null # CAMUNDA_MONITORING_JFR
metrics:
# Controls whether to collect metrics about actor usage such as actor job execution latencies
actor: true # CAMUNDA_MONITORING_METRICS_ACTOR
# Enable exporter execution metrics
enable-exporter-execution-metrics: false # CAMUNDA_MONITORING_METRICS_ENABLEEXPORTEREXECUTIONMETRICS
operate:
backup:
incomplete-check-timeout-in-seconds: null # CAMUNDA_OPERATE_BACKUP_INCOMPLETECHECKTIMEOUTINSECONDS
repository-name: null # CAMUNDA_OPERATE_BACKUP_REPOSITORYNAME
snapshot-timeout: null # CAMUNDA_OPERATE_BACKUP_SNAPSHOTTIMEOUT
batch-operation-max-size: null # CAMUNDA_OPERATE_BATCHOPERATIONMAXSIZE
cloud:
cluster-id: null # CAMUNDA_OPERATE_CLOUD_CLUSTERID
console-url: null # CAMUNDA_OPERATE_CLOUD_CONSOLEURL
mixpanel-a-p-i-host: null # CAMUNDA_OPERATE_CLOUD_MIXPANELAPIHOST
mixpanel-token: null # CAMUNDA_OPERATE_CLOUD_MIXPANELTOKEN
organization-id: null # CAMUNDA_OPERATE_CLOUD_ORGANIZATIONID
permission-audience: null # CAMUNDA_OPERATE_CLOUD_PERMISSIONAUDIENCE
permission-url: null # CAMUNDA_OPERATE_CLOUD_PERMISSIONURL
database: null # CAMUNDA_OPERATE_DATABASE
elasticsearch:
batch-size: null # CAMUNDA_OPERATE_ELASTICSEARCH_BATCHSIZE
cluster-name: null # CAMUNDA_OPERATE_ELASTICSEARCH_CLUSTERNAME
connect-timeout: null # CAMUNDA_OPERATE_ELASTICSEARCH_CONNECTTIMEOUT
create-schema: null # CAMUNDA_OPERATE_ELASTICSEARCH_CREATESCHEMA
date-format: null # CAMUNDA_OPERATE_ELASTICSEARCH_DATEFORMAT
els-date-format: null # CAMUNDA_OPERATE_ELASTICSEARCH_ELSDATEFORMAT
health-check-enabled: null # CAMUNDA_OPERATE_ELASTICSEARCH_HEALTHCHECKENABLED
index-prefix: null # CAMUNDA_OPERATE_ELASTICSEARCH_INDEXPREFIX
interceptor-plugins: null # CAMUNDA_OPERATE_ELASTICSEARCH_INTERCEPTORPLUGINS
password: null # CAMUNDA_OPERATE_ELASTICSEARCH_PASSWORD
socket-timeout: null # CAMUNDA_OPERATE_ELASTICSEARCH_SOCKETTIMEOUT
ssl:
certificate-path: null # CAMUNDA_OPERATE_ELASTICSEARCH_SSL_CERTIFICATEPATH
self-signed: null # CAMUNDA_OPERATE_ELASTICSEARCH_SSL_SELFSIGNED
verify-hostname: null # CAMUNDA_OPERATE_ELASTICSEARCH_SSL_VERIFYHOSTNAME
url: null # CAMUNDA_OPERATE_ELASTICSEARCH_URL
username: null # CAMUNDA_OPERATE_ELASTICSEARCH_USERNAME
enterprise: null # CAMUNDA_OPERATE_ENTERPRISE
identity:
audience: null # CAMUNDA_OPERATE_IDENTITY_AUDIENCE
base-url: null # CAMUNDA_OPERATE_IDENTITY_BASEURL
client-id: null # CAMUNDA_OPERATE_IDENTITY_CLIENTID
client-secret: null # CAMUNDA_OPERATE_IDENTITY_CLIENTSECRET
issuer-backend-url: null # CAMUNDA_OPERATE_IDENTITY_ISSUERBACKENDURL
issuer-url: null # CAMUNDA_OPERATE_IDENTITY_ISSUERURL
redirect-root-url: null # CAMUNDA_OPERATE_IDENTITY_REDIRECTROOTURL
resource-permissions-update-period: null # CAMUNDA_OPERATE_IDENTITY_RESOURCEPERMISSIONSUPDATEPERIOD
importer:
read-archived-parents: null # CAMUNDA_OPERATE_IMPORTER_READARCHIVEDPARENTS
retry-reading-parents: null # CAMUNDA_OPERATE_IMPORTER_RETRYREADINGPARENTS
variable-size-threshold: null # CAMUNDA_OPERATE_IMPORTER_VARIABLESIZETHRESHOLD
opensearch:
aws-enabled: null # CAMUNDA_OPERATE_OPENSEARCH_AWSENABLED
batch-size: null # CAMUNDA_OPERATE_OPENSEARCH_BATCHSIZE
cluster-name: null # CAMUNDA_OPERATE_OPENSEARCH_CLUSTERNAME
connect-timeout: null # CAMUNDA_OPERATE_OPENSEARCH_CONNECTTIMEOUT
create-schema: null # CAMUNDA_OPERATE_OPENSEARCH_CREATESCHEMA
date-format: null # CAMUNDA_OPERATE_OPENSEARCH_DATEFORMAT
health-check-enabled: null # CAMUNDA_OPERATE_OPENSEARCH_HEALTHCHECKENABLED
index-prefix: null # CAMUNDA_OPERATE_OPENSEARCH_INDEXPREFIX
interceptor-plugins: null # CAMUNDA_OPERATE_OPENSEARCH_INTERCEPTORPLUGINS
os-date-format: null # CAMUNDA_OPERATE_OPENSEARCH_OSDATEFORMAT
password: null # CAMUNDA_OPERATE_OPENSEARCH_PASSWORD
socket-timeout: null # CAMUNDA_OPERATE_OPENSEARCH_SOCKETTIMEOUT
ssl:
certificate-path: null # CAMUNDA_OPERATE_OPENSEARCH_SSL_CERTIFICATEPATH
self-signed: null # CAMUNDA_OPERATE_OPENSEARCH_SSL_SELFSIGNED
verify-hostname: null # CAMUNDA_OPERATE_OPENSEARCH_SSL_VERIFYHOSTNAME
url: null # CAMUNDA_OPERATE_OPENSEARCH_URL
username: null # CAMUNDA_OPERATE_OPENSEARCH_USERNAME
operation-executor:
batch-size: null # CAMUNDA_OPERATE_OPERATIONEXECUTOR_BATCHSIZE
deletion-batch-size: null # CAMUNDA_OPERATE_OPERATIONEXECUTOR_DELETIONBATCHSIZE
executor-enabled: null # CAMUNDA_OPERATE_OPERATIONEXECUTOR_EXECUTORENABLED
lock-timeout: null # CAMUNDA_OPERATE_OPERATIONEXECUTOR_LOCKTIMEOUT
queue-size: null # CAMUNDA_OPERATE_OPERATIONEXECUTOR_QUEUESIZE
threads-count: null # CAMUNDA_OPERATE_OPERATIONEXECUTOR_THREADSCOUNT
worker-id: null # CAMUNDA_OPERATE_OPERATIONEXECUTOR_WORKERID
password: null # CAMUNDA_OPERATE_PASSWORD
rfc3339-api-date-format: null # CAMUNDA_OPERATE_RFC3339APIDATEFORMAT
roles: null # CAMUNDA_OPERATE_ROLES
tasklist-url: null # CAMUNDA_OPERATE_TASKLISTURL
version: null # CAMUNDA_OPERATE_VERSION
webapp-enabled: null # CAMUNDA_OPERATE_WEBAPPENABLED
zeebe:
certificate-path: null # CAMUNDA_OPERATE_ZEEBE_CERTIFICATEPATH
gateway-address: null # CAMUNDA_OPERATE_ZEEBE_GATEWAYADDRESS
secure: null # CAMUNDA_OPERATE_ZEEBE_SECURE
processing:
# While disabled, checking the Time-To-Live of buffered messages blocks all other executions that occur on the stream processor, including process execution and job activation/completion. When enabled, the Message TTL Checker runs asynchronously to the Engine's stream processor. This helps improve throughput and process latency in use cases that publish many messages with a non-zero TTL. We recommend testing this feature in a non-production environment before enabling it in production.
enable-async-message-ttl-checker: false # CAMUNDA_PROCESSING_ENABLEASYNCMESSAGETTLCHECKER
# Allows scheduled processing tasks, such as checking for timed-out jobs, to run concurrently with regular processing. This is a performance optimization to ensure that processing is not interrupted by a higher-than-usual workload for any of the scheduled tasks. This should only be disabled in case of bugs, for example if one of the scheduled tasks is not safe to run concurrently with regular processing. This replaces the deprecated experimental settings that enable async scheduling for specific tasks only, for example `enableMessageTTLCheckerAsync`. When `enableAsyncScheduledTasks` is enabled (which it is by default), the deprecated settings have no effect. When `enableAsyncScheduledTasks` is disabled, scheduled tasks only run asynchronously if explicitly enabled by the deprecated setting.
enable-async-scheduled-tasks: true # CAMUNDA_PROCESSING_ENABLEASYNCSCHEDULEDTASKS
# While disabled, checking for due timers blocks all other executions that occur on the stream processor, including process execution and job activation/completion. When enabled, the Due Date Checker runs asynchronously to the Engine's stream processor. This helps improve throughput and process latency when there are a lot of timers. We recommend testing this feature in a non-production environment before enabling it in production.
enable-async-timer-duedate-checker: false # CAMUNDA_PROCESSING_ENABLEASYNCTIMERDUEDATECHECKER
# Configures if inserting or updating key-value pairs on RocksDB should check that foreign keys exist.
enable-foreign-key-checks: false # CAMUNDA_PROCESSING_ENABLEFOREIGNKEYCHECKS
# Controls whether the full message body is included in the follow-up event when a message expires. When enabled, the system appends the entire message payload during expiration. This is useful in environments where full visibility into expired messages is needed, so that full message details can be exported by configuring the exporter filtering to allow `Message.EXPIRED` events. However, including the full message body increases the size of each follow-up event. For large messages (e.g., ~100KB), this may lead to batch size limits being exceeded earlier. As a result, fewer messages may be expired per batch (e.g., only 40 instead of more), which requires multiple checker runs to process the full batch. Please be aware that this may introduce performance regressions or cause the expired message state to grow more quickly over time. To maintain backward compatibility and avoid performance degradation, the default value is false, meaning message bodies will not be appended unless explicitly enabled.
enable-message-body-on-expired: false # CAMUNDA_PROCESSING_ENABLEMESSAGEBODYONEXPIRED
# Configures if the basic operations on RocksDB, such as inserting or deleting key-value pairs, should check preconditions, for example that a key does not already exist when inserting.
enable-preconditions-check: false # CAMUNDA_PROCESSING_ENABLEPRECONDITIONSCHECK
enable-straightthrough-processing-loop-detector: true # CAMUNDA_PROCESSING_ENABLESTRAIGHTTHROUGHPROCESSINGLOOPDETECTOR
    # Changes the DueDateTimerChecker to yield to other processing steps in situations where it has many (i.e. millions of) timers to process. If set to false (default), the DueDateTimerChecker will activate all due timers. In the worst case, this can lead to the node being blocked for an indefinite amount of time and subsequently being flagged as unhealthy. Currently, there is no known way to recover from this situation. If set to true, the DueDateTimerChecker will yield to other processing steps. This avoids the worst case described above. However, under consistently high load the activated timers may fall behind real time if more timers become due than can be activated during a certain time period.
enable-yielding-due-date-checker: true # CAMUNDA_PROCESSING_ENABLEYIELDINGDUEDATECHECKER
engine:
batch-operations:
# The interval at which the batch operation scheduler runs. Defaults to {@link EngineConfiguration#DEFAULT_BATCH_OPERATION_SCHEDULER_INTERVAL}.
scheduler-interval: null # CAMUNDA_PROCESSING_ENGINE_BATCHOPERATIONS_SCHEDULERINTERVAL
distribution:
# Allows configuring the maximum backoff duration for command redistribution retries. The retry interval is doubled after each retry until it reaches this maximum duration.
max-backoff-duration: null # CAMUNDA_PROCESSING_ENGINE_DISTRIBUTION_MAXBACKOFFDURATION
        # Allows configuring the command redistribution retry interval. This is the initial interval used when retrying command distributions that have not been acknowledged.
redistribution-interval: null # CAMUNDA_PROCESSING_ENGINE_DISTRIBUTION_REDISTRIBUTIONINTERVAL
flow-control:
request:
aimd:
# The backoff ratio
backoff-ratio: 0.9 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_AIMD_BACKOFFRATIO
# The initial limit
initial-limit: 100 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_AIMD_INITIALLIMIT
# The maximum limit
max-limit: 1000 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_AIMD_MAXLIMIT
# The minimum limit
min-limit: 1 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_AIMD_MINLIMIT
# The request timeout
request-timeout: "200ms" # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_AIMD_REQUESTTIMEOUT
# The algorithm to use for limiting (aimd, fixed, vegas, gradient, gradient2, legacy-vegas)
algorithm: "aimd" # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_ALGORITHM
# Enable request limit
enabled: true # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_ENABLED
fixed:
# The limit
limit: 20 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_FIXED_LIMIT
gradient:
# The initial limit
initial-limit: 20 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_GRADIENT_INITIALLIMIT
# The minimum limit
min-limit: 10 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_GRADIENT_MINLIMIT
# The RTT tolerance
rtt-tolerance: 2 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_GRADIENT_RTTTOLERANCE
gradient2:
# The initial limit
initial-limit: 20 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_GRADIENT2_INITIALLIMIT
# The long window
long-window: 600 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_GRADIENT2_LONGWINDOW
# The minimum limit
min-limit: 10 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_GRADIENT2_MINLIMIT
# The RTT tolerance
rtt-tolerance: 2 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_GRADIENT2_RTTTOLERANCE
legacy-vegas:
# The alpha limit
alpha-limit: 0.7 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_LEGACYVEGAS_ALPHALIMIT
# The beta limit
beta-limit: 0.95 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_LEGACYVEGAS_BETALIMIT
# The initial limit
initial-limit: 1024 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_LEGACYVEGAS_INITIALLIMIT
# The max concurrency
max-concurrency: null # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_LEGACYVEGAS_MAXCONCURRENCY
vegas:
# The alpha value
alpha: 3 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_VEGAS_ALPHA
# The beta value
beta: 6 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_VEGAS_BETA
# The initial limit
initial-limit: 20 # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_VEGAS_INITIALLIMIT
# Use windowed limit
windowed: true # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_WINDOWED
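        # Example (illustrative, not a recommendation): to switch from the default
        # AIMD limiter to the fixed limiter with a custom limit, the keys above can
        # be set directly or via CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_ALGORITHM and
        # CAMUNDA_PROCESSING_FLOWCONTROL_REQUEST_FIXED_LIMIT, e.g.:
        #   algorithm: "fixed"
        #   fixed:
        #     limit: 50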
write:
# Enable rate limit
enabled: false # CAMUNDA_PROCESSING_FLOWCONTROL_WRITE_ENABLED
# Sets the maximum number of records written per second
limit: 0 # CAMUNDA_PROCESSING_FLOWCONTROL_WRITE_LIMIT
# Sets the ramp up time, for example 10s
ramp-up: 0 # CAMUNDA_PROCESSING_FLOWCONTROL_WRITE_RAMPUP
throttle:
# When exporting is a bottleneck, the write rate is throttled to keep the backlog at this value.
acceptable-backlog: 100000 # CAMUNDA_PROCESSING_FLOWCONTROL_WRITE_THROTTLE_ACCEPTABLEBACKLOG
# Enable throttling. If enabled, throttle the write rate based on exporting backlog
enabled: false # CAMUNDA_PROCESSING_FLOWCONTROL_WRITE_THROTTLE_ENABLED
# Even when exporting is fully blocked, always allow this many writes per second
minimum-limit: 100 # CAMUNDA_PROCESSING_FLOWCONTROL_WRITE_THROTTLE_MINIMUMLIMIT
# How often to adjust the throttling
resolution: "15s" # CAMUNDA_PROCESSING_FLOWCONTROL_WRITE_THROTTLE_RESOLUTION
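      # Example (illustrative values only): to cap the write rate and throttle it
      # against the export backlog, the write block above could be configured as:
      #   write:
      #     enabled: true
      #     limit: 1000
      #     ramp-up: "10s"
      #     throttle:
      #       enabled: true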
    # Sets the maximum number of commands that are processed within one batch. The processor will process until no more follow-up commands are created by the initial command or the configured limit is reached. By default, up to 100 commands are processed in one batch. Can be set to 1 to disable batch processing. Must be a positive integer. Note that the resulting batch will contain more entries than this limit because it includes follow-up events. When the resulting batch size is too large (see maxMessageSize), processing will be rolled back and retried with a smaller maximum batch size. Lowering the command limit can reduce the frequency of rollback and retry.
max-commands-in-batch: 100 # CAMUNDA_PROCESSING_MAXCOMMANDSINBATCH
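    # Example (illustrative value): if batches are frequently rolled back because
    # they exceed the maximum message size, the limit can be lowered, e.g.:
    #   max-commands-in-batch: 50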
    # Configures the rate at which a partition leader checks for expired scheduled tasks such as the due date checker. The default value is 1 second. Use a lower interval to potentially decrease delays between the requested and actual execution of scheduled tasks. Using a low interval will result in unnecessary load while idle. We recommend benchmarking any changes to this setting.
scheduled-tasks-check-interval: "1s" # CAMUNDA_PROCESSING_SCHEDULEDTASKSCHECKINTERVAL
    # Allows skipping certain commands by their position. This is useful for debugging and data recovery. It is not recommended to use this in production. The value is a comma-separated list of positions to skip. Whitespace is ignored.
skip-positions: null # CAMUNDA_PROCESSING_SKIPPOSITIONS
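    # Example (hypothetical positions, shown only to illustrate the format):
    #   skip-positions: "4294967296, 4294967297"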
rest:
api-executor:
core-pool-size-multiplier: null # CAMUNDA_REST_APIEXECUTOR_COREPOOLSIZEMULTIPLIER
keep-alive-seconds: null # CAMUNDA_REST_APIEXECUTOR_KEEPALIVESECONDS
max-pool-size-multiplier: null # CAMUNDA_REST_APIEXECUTOR_MAXPOOLSIZEMULTIPLIER
queue-capacity: null # CAMUNDA_REST_APIEXECUTOR_QUEUECAPACITY
process-cache:
expiration-idle-millis: null # CAMUNDA_REST_PROCESSCACHE_EXPIRATIONIDLEMILLIS
max-size: null # CAMUNDA_REST_PROCESSCACHE_MAXSIZE
system:
actor:
idle:
max-park-period: null # CAMUNDA_SYSTEM_ACTOR_IDLE_MAXPARKPERIOD
max-spins: null # CAMUNDA_SYSTEM_ACTOR_IDLE_MAXSPINS
max-yields: null # CAMUNDA_SYSTEM_ACTOR_IDLE_MAXYIELDS
min-park-period: null # CAMUNDA_SYSTEM_ACTOR_IDLE_MINPARKPERIOD
    # Controls whether to use the system clock or a mutable one. When enabled, time progression can be controlled programmatically for testing purposes.
clock-controlled: false # CAMUNDA_SYSTEM_CLOCKCONTROLLED
# Controls the number of non-blocking CPU threads to be used. WARNING: You should never specify a value that is larger than the number of physical cores available. Good practice is to leave 1-2 cores for IO threads and the operating system (it has to run somewhere). For example, when running Zeebe on a machine which has 4 cores, a good value would be 2.
cpu-thread-count: 2 # CAMUNDA_SYSTEM_CPUTHREADCOUNT
    # Controls the number of IO threads to be used. These threads are used for workloads that write data to disk. While writing, these threads are blocked, which means that they yield the CPU.
io-thread-count: 2 # CAMUNDA_SYSTEM_IOTHREADCOUNT
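    # Example (illustrative, following the guidance above): on a machine with 8
    # physical cores, a reasonable starting point would be to keep io-thread-count
    # at 2 and set:
    #   cpu-thread-count: 6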
restore:
ignore-files-in-target: "lost+found" # CAMUNDA_SYSTEM_RESTORE_IGNOREFILESINTARGET
validate-config: true # CAMUNDA_SYSTEM_RESTORE_VALIDATECONFIG
upgrade:
      # Toggles the version check restriction used for migration. Useful for testing migration logic on snapshot or alpha versions. Default: true, which means it is not allowed to migrate to an incompatible version such as SNAPSHOT or alpha.
enable-version-check: null # CAMUNDA_SYSTEM_UPGRADE_ENABLEVERSIONCHECK
tasklist:
auth0:
claim-name: null # CAMUNDA_TASKLIST_AUTH0_CLAIMNAME
client-id: null # CAMUNDA_TASKLIST_AUTH0_CLIENTID
client-secret: null # CAMUNDA_TASKLIST_AUTH0_CLIENTSECRET
domain: null # CAMUNDA_TASKLIST_AUTH0_DOMAIN
email-key: null # CAMUNDA_TASKLIST_AUTH0_EMAILKEY
name-key: null # CAMUNDA_TASKLIST_AUTH0_NAMEKEY
organization: null # CAMUNDA_TASKLIST_AUTH0_ORGANIZATION
organizations-key: null # CAMUNDA_TASKLIST_AUTH0_ORGANIZATIONSKEY
backup:
repository-name: null # CAMUNDA_TASKLIST_BACKUP_REPOSITORYNAME
client:
audience: null # CAMUNDA_TASKLIST_CLIENT_AUDIENCE
cluster-id: null # CAMUNDA_TASKLIST_CLIENT_CLUSTERID
cloud:
cluster-id: null # CAMUNDA_TASKLIST_CLOUD_CLUSTERID
console-url: null # CAMUNDA_TASKLIST_CLOUD_CONSOLEURL
permission-audience: null # CAMUNDA_TASKLIST_CLOUD_PERMISSIONAUDIENCE
permission-url: null # CAMUNDA_TASKLIST_CLOUD_PERMISSIONURL
database: null # CAMUNDA_TASKLIST_DATABASE
documentation:
api-migration-docs-url: null # CAMUNDA_TASKLIST_DOCUMENTATION_APIMIGRATIONDOCSURL
elasticsearch:
batch-size: null # CAMUNDA_TASKLIST_ELASTICSEARCH_BATCHSIZE
cluster-name: null # CAMUNDA_TASKLIST_ELASTICSEARCH_CLUSTERNAME
connect-timeout: null # CAMUNDA_TASKLIST_ELASTICSEARCH_CONNECTTIMEOUT
create-schema: null # CAMUNDA_TASKLIST_ELASTICSEARCH_CREATESCHEMA
date-format: null # CAMUNDA_TASKLIST_ELASTICSEARCH_DATEFORMAT
els-date-format: null # CAMUNDA_TASKLIST_ELASTICSEARCH_ELSDATEFORMAT
health-check-enabled: null # CAMUNDA_TASKLIST_ELASTICSEARCH_HEALTHCHECKENABLED
index-prefix: null # CAMUNDA_TASKLIST_ELASTICSEARCH_INDEXPREFIX
interceptor-plugins: null # CAMUNDA_TASKLIST_ELASTICSEARCH_INTERCEPTORPLUGINS
max-terms-count: null # CAMUNDA_TASKLIST_ELASTICSEARCH_MAXTERMSCOUNT
password: null # CAMUNDA_TASKLIST_ELASTICSEARCH_PASSWORD
socket-timeout: null # CAMUNDA_TASKLIST_ELASTICSEARCH_SOCKETTIMEOUT
ssl:
certificate-path: null # CAMUNDA_TASKLIST_ELASTICSEARCH_SSL_CERTIFICATEPATH
self-signed: null # CAMUNDA_TASKLIST_ELASTICSEARCH_SSL_SELFSIGNED
verify-hostname: null # CAMUNDA_TASKLIST_ELASTICSEARCH_SSL_VERIFYHOSTNAME
url: null # CAMUNDA_TASKLIST_ELASTICSEARCH_URL
username: null # CAMUNDA_TASKLIST_ELASTICSEARCH_USERNAME
enterprise: null # CAMUNDA_TASKLIST_ENTERPRISE
feature-flag:
allow-non-self-assignment: null # CAMUNDA_TASKLIST_FEATUREFLAG_ALLOWNONSELFASSIGNMENT
process-public-endpoints: null # CAMUNDA_TASKLIST_FEATUREFLAG_PROCESSPUBLICENDPOINTS
identity:
user-access-restrictions-enabled: null # CAMUNDA_TASKLIST_IDENTITY_USERACCESSRESTRICTIONSENABLED
importer:
variable-size-threshold: null # CAMUNDA_TASKLIST_IMPORTER_VARIABLESIZETHRESHOLD
open-search:
aws-enabled: null # CAMUNDA_TASKLIST_OPENSEARCH_AWSENABLED
batch-size: null # CAMUNDA_TASKLIST_OPENSEARCH_BATCHSIZE
cluster-name: null # CAMUNDA_TASKLIST_OPENSEARCH_CLUSTERNAME
connect-timeout: null # CAMUNDA_TASKLIST_OPENSEARCH_CONNECTTIMEOUT
create-schema: null # CAMUNDA_TASKLIST_OPENSEARCH_CREATESCHEMA
date-format: null # CAMUNDA_TASKLIST_OPENSEARCH_DATEFORMAT
els-date-format: null # CAMUNDA_TASKLIST_OPENSEARCH_ELSDATEFORMAT
health-check-enabled: null # CAMUNDA_TASKLIST_OPENSEARCH_HEALTHCHECKENABLED
index-prefix: null # CAMUNDA_TASKLIST_OPENSEARCH_INDEXPREFIX
interceptor-plugins: null # CAMUNDA_TASKLIST_OPENSEARCH_INTERCEPTORPLUGINS
max-terms-count: null # CAMUNDA_TASKLIST_OPENSEARCH_MAXTERMSCOUNT
password: null # CAMUNDA_TASKLIST_OPENSEARCH_PASSWORD
socket-timeout: null # CAMUNDA_TASKLIST_OPENSEARCH_SOCKETTIMEOUT
ssl:
certificate-path: null # CAMUNDA_TASKLIST_OPENSEARCH_SSL_CERTIFICATEPATH
self-signed: null # CAMUNDA_TASKLIST_OPENSEARCH_SSL_SELFSIGNED
verify-hostname: null # CAMUNDA_TASKLIST_OPENSEARCH_SSL_VERIFYHOSTNAME
url: null # CAMUNDA_TASKLIST_OPENSEARCH_URL
username: null # CAMUNDA_TASKLIST_OPENSEARCH_USERNAME
version: null # CAMUNDA_TASKLIST_VERSION
zeebe:
certificate-path: null # CAMUNDA_TASKLIST_ZEEBE_CERTIFICATEPATH
gateway-address: null # CAMUNDA_TASKLIST_ZEEBE_GATEWAYADDRESS
rest-address: null # CAMUNDA_TASKLIST_ZEEBE_RESTADDRESS
secure: null # CAMUNDA_TASKLIST_ZEEBE_SECURE
zeebe:
broker:
backpressure: null # ZEEBE_BROKER_BACKPRESSURE
cluster: null # ZEEBE_BROKER_CLUSTER
data: null # ZEEBE_BROKER_DATA
execution-metrics-exporter-enabled: null # ZEEBE_BROKER_EXECUTIONMETRICSEXPORTERENABLED
experimental: null # ZEEBE_BROKER_EXPERIMENTAL
exporters: null # ZEEBE_BROKER_EXPORTERS
exporting: null # ZEEBE_BROKER_EXPORTING
flow-control: null # ZEEBE_BROKER_FLOWCONTROL
gateway: null # ZEEBE_BROKER_GATEWAY
network: null # ZEEBE_BROKER_NETWORK
threads: null # ZEEBE_BROKER_THREADS
gateway:
cluster: null # ZEEBE_GATEWAY_CLUSTER
filters: null # ZEEBE_GATEWAY_FILTERS
interceptors: null # ZEEBE_GATEWAY_INTERCEPTORS
long-polling: null # ZEEBE_GATEWAY_LONGPOLLING
network: null # ZEEBE_GATEWAY_NETWORK
security: null # ZEEBE_GATEWAY_SECURITY
threads: null # ZEEBE_GATEWAY_THREADS
import json
import re


def to_env_var(name):
    # Spring-style property name -> environment variable name,
    # e.g. "camunda.processing.max-commands-in-batch" -> "CAMUNDA_PROCESSING_MAXCOMMANDSINBATCH".
    return name.replace(".", "_").replace("-", "").upper()


def metadata_to_yaml(input_file, output_file):
    try:
        with open(input_file, "r") as f:
            metadata = json.load(f)
    except Exception as e:
        print(f"Error reading {input_file}: {e}")
        return

    properties = metadata.get("properties", [])

    # Build a nested dict first to establish the YAML structure, plus lookup
    # maps from the full property name to its env var and description.
    config_dict = {}
    env_map = {}
    desc_map = {}

    for prop in properties:
        if prop.get("deprecated"):
            continue
        name = prop.get("name")
        if not name:
            continue
        default_value = prop.get("defaultValue")
        description = prop.get("description")
        env_map[name] = to_env_var(name)
        if description:
            desc_map[name] = description

        # Walk/create the nested structure for "a.b.c"-style property names.
        parts = name.split(".")
        current = config_dict
        for part in parts[:-1]:
            current = current.setdefault(part, {})
        current[parts[-1]] = default_value

    def write_with_comments(d, indent=0, path=""):
        lines = []
        for key, value in d.items():
            current_path = f"{path}.{key}" if path else key
            if isinstance(value, dict):
                lines.append(" " * indent + f"{key}:")
                lines.extend(write_with_comments(value, indent + 1, current_path))
            else:
                description = desc_map.get(current_path)
                if description:
                    # Clean up the description: drop newlines and strip HTML tags.
                    clean_desc = description.replace("\n", " ")
                    clean_desc = re.sub("<[^<]+?>", "", clean_desc)
                    lines.append(" " * indent + f"# {clean_desc}")
                env_var = env_map.get(current_path, "")
                val_str = json.dumps(value) if value is not None else "null"
                lines.append(" " * indent + f"{key}: {val_str} # {env_var}")
        return lines

    try:
        lines = write_with_comments(config_dict)
        with open(output_file, "w") as f:
            f.write("\n".join(lines) + "\n")
        print(f"Successfully generated {output_file} with env var comments")
    except Exception as e:
        print(f"Error writing {output_file}: {e}")


if __name__ == "__main__":
    input_path = (
        "configuration/target/classes/META-INF/spring-configuration-metadata.json"
    )
    output_path = "config_defaults.yaml"
    metadata_to_yaml(input_path, output_path)
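A minimal, illustrative sanity check of the property-name-to-environment-variable mapping (the property name is taken from the generated file above; this assumes the functions defined in the script are importable or pasted into the same session):

# to_env_var turns dots into underscores, drops dashes, and upper-cases the result.
assert (
    to_env_var("camunda.processing.max-commands-in-batch")
    == "CAMUNDA_PROCESSING_MAXCOMMANDSINBATCH"
)

When run as a script, it presumably expects to be invoked from the repository root so that the configuration/target/... metadata path resolves; both paths can be swapped for any other Spring configuration metadata file and output location.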