@anomit
Created December 1, 2025 11:31

DSV Node Setup Quick Start (Devnet)

These quick-start instructions will help you get your own DSV node up and running. The full guide lives at https://github.com/powerloom/snapshot-sequencer-validator/blob/develop/docs/DSV_NODE_SETUP.md#quick-start

Copy one of the env files to .env within the snapshot-sequencer-validator directory, depending on your deployment preference.

For example,

cp env.ext-ipfs .env

# OR

cp env.int-ipfs .env

Then run the setup and control script with

# with internal IPFS node running on the same instance
./dsv.sh start --with-vpa --with-ipfs && ./dsv.sh logs

or

# to use an externally hosted IPFS node
./dsv.sh start --with-vpa && ./dsv.sh logs

But first,

Env fields to modify for your specific needs

Validator identity addresses and VPA specific settings

Tip

Use a tool like Vanity-ETH (https://vanity-eth.tk/) to generate unique addresses/keys for your validator.

Important

I have to set these identities manually on the protocol state contracts as of now. Once you generate these addresses, reach out to me.

VPA_VALIDATOR_ADDRESS

This is the main identity of the validator that identifies it on the protocol state contracts.

VPA_SIGNER_ADDRESSES

This is the list of addresses that are authorized to submit batches to the protocol state contracts. The inbuilt transaction relayer picks these up when submitting batches to the protocol state contracts.

VPA_SIGNER_PRIVATE_KEYS

Private keys corresponding to the addresses in VPA_SIGNER_ADDRESSES.

Important

At the moment, it is best to use the main identity as the first signer and then add another signer for redundancy.
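Putting the above together, a minimal identity configuration might look like the following (the addresses and key placeholders are purely illustrative; substitute your own generated values):

```
# main identity of the validator on the protocol state contracts
VPA_VALIDATOR_ADDRESS=0xYourValidatorAddress
# main identity first, then a second signer for redundancy
VPA_SIGNER_ADDRESSES=0xYourValidatorAddress,0xYourSecondSignerAddress
# private keys in the SAME order as the addresses above
VPA_SIGNER_PRIVATE_KEYS=<validator-private-key>,<second-signer-private-key>
```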

Self-hosted/internal IPFS node configuration

# Storage provider: ipfs, arweave, filecoin
# NOTE: Requires corresponding storage service to be running
STORAGE_PROVIDER=ipfs
 
# IPFS node address (ONLY if using IPFS for finalized batch storage)
# You need to run an IPFS node separately if finalizer is enabled
IPFS_HOST=/dns/ipfs/tcp/5001
# For Docker: use ipfs:5001

# IPFS Port Configuration (for Docker deployment)
# ============================================
# Controls which ports are exposed from the IPFS container
# Only change if you have port conflicts on the host system

# IPFS API port (required for DSV components to communicate with IPFS)
# Default: 5001
IPFS_API_PORT=5001

# IPFS Swarm port (P2P networking - allows other IPFS nodes to connect)
# Default: 4001
IPFS_SWARM_PORT=4001

# ============================================
# IPFS CLEANUP CONFIGURATION (for local IPFS service)
# ============================================
# These settings only apply when using the built-in IPFS service (./dsv.sh start --with-ipfs)
# Automatically unpins old CIDs to prevent storage bloat

# Maximum age for pins before cleanup (in days)
# CIDs older than this will be unpinned automatically
IPFS_CLEANUP_MAX_AGE_DAYS=7

# Cleanup interval (in hours)
# How often to run the cleanup process
# Every 3 days
IPFS_CLEANUP_INTERVAL_HOURS=72


# IPFS Data Directory Configuration
# ============================================
# Controls where IPFS data is stored on the host system
# This allows mounting IPFS data on large partitions or dedicated storage

# Host directory for IPFS data storage
# Default: /data/ipfs (use large partition when available)
# For production: Set to your high-capacity storage mount point
IPFS_DATA_DIR=/data/ipfs

# Example configurations:
# - Large partition: IPFS_DATA_DIR=/mnt/storage/ipfs
# - Dedicated disk: IPFS_DATA_DIR=/media/nvme1n1/ipfs
# - Default location: IPFS_DATA_DIR=/data/ipfs

Keeping the IPFS_HOST as /dns/ipfs/tcp/5001 along with running ./dsv.sh start --with-ipfs --with-vpa will start the DSV node with a self-hosted/internal IPFS node launched via Docker Compose.

The IPFS_DATA_DIR is the directory where the IPFS node will store its data. It is set to /data/ipfs by default. You can change it to your preferred location; make sure it is mounted into the IPFS container.
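As a sketch, you can create the data directory ahead of the first start so the bind mount does not land in an unexpected place (the relative path below is purely illustrative; the default is /data/ipfs, and a large partition is recommended):

```shell
# pick the storage location and create it before starting the node
# (relative path used here only for illustration)
IPFS_DATA_DIR="./data/ipfs"
mkdir -p "${IPFS_DATA_DIR}"
test -d "${IPFS_DATA_DIR}" && echo "IPFS data dir ready: ${IPFS_DATA_DIR}"
```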

SEQUENCER_ID

Set it to any string at the moment. I have pre-filled validator3 for you.

P2P_PORT

This port should be open to the public to help with peer discovery, DHT exchange, and gossipsub mesh formation. Without it, DSV nodes are as good as partitioned out of the network and won't be able to catch snapshot submissions or exchange validator votes and attestations.

PRIVATE_KEY

  • This is a libp2p networking stack specific private key that uniquely identifies your DSV node on the network, in combination with PUBLIC_IP and P2P_PORT.
  • It is important for this combination to be unique.
  • For example, if after the first few runs of the DSV node you change the private key but restart on the same port, it will conflict with stale DHT entries, which may get your restarted DSV node kicked off the network until the old entries expire.

How to generate it?

~$ cd snapshot-sequencer-validator/
~/snapshot-sequencer-validator$ cd key_generator/
~/snapshot-sequencer-validator/key_generator$ go run generate_key.go

Generated Private Key (hex): 9f71be7a267acc1ef4a7f71b8e3b7ba6767cda28d574a472b876f83f80a0c811bea3a61d0923bb4325fc700473135d243cbd013be51b6e45f74e9ae2f3cdac39
Derived Peer ID: 12D3KooWNeYN2Lq9NEUvTn6CXVyRFx4P1YtT5PbMGxfVxUa2C8Gp

[ . . . ]
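One way to wire the generated key into your env file is a small shell snippet like the following (the key value is truncated and illustrative; paste your full hex key, and note that the sed -i form assumes GNU sed on Linux):

```shell
# replace (or append) the PRIVATE_KEY line in .env with the generated key
GENERATED_KEY="9f71be7a...cdac39"   # paste the full hex key from generate_key.go
touch .env                          # no-op if .env already exists
if grep -q '^PRIVATE_KEY=' .env; then
  # key line already present: replace it in place (GNU sed syntax)
  sed -i "s|^PRIVATE_KEY=.*|PRIVATE_KEY=${GENERATED_KEY}|" .env
else
  # key line missing: append it
  echo "PRIVATE_KEY=${GENERATED_KEY}" >> .env
fi
grep '^PRIVATE_KEY=' .env
```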

PUBLIC_IP

The public IP address of the machine the DSV node runs on; in combination with P2P_PORT and PRIVATE_KEY it forms your node's network identity.

RPC URLs

Multiple URLs can be configured; comma-separated format is supported.

POWERLOOM_RPC_NODES

This is the URL of the Powerloom protocol chain RPC node to use for the DSV node. Currently it is set to the public devnet RPC node endpoint. If you face issues with rate limits etc, reach out to us.
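For instance, a primary devnet endpoint plus a fallback can be configured like this (the second URL is a hypothetical placeholder for your own fallback endpoint):

```
POWERLOOM_RPC_NODES=https://rpc-devnet.powerloom.dev,https://your-fallback-rpc.example.com
```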

SOURCE_RPC_NODES

This is the URL of the source chain RPC node to use for the DSV node; for this BDS data market, that is Ethereum Mainnet.

# Decentralized Sequencer Environment Variables
# The following keys aren't configured here but have corresponding default values that are picked up by Docker Compose or the build scripts, so they're safe to leave alone.
# 1. AGGREGATION_WINDOW_SECONDS - Level 2 aggregation window timing
# 2. BATCH_AGGREGATION_INTERVAL - Batch aggregation check interval
# 3. BATCH_AGGREGATION_LOGGING_LEVEL - Logging detail for batch aggregation
# 4. BATCH_AGGREGATION_METRICS_ENABLED - Enable detailed batch aggregation metrics
# 5. BATCH_AGGREGATION_TIMEOUT - Timeout for batch aggregation voting
# 6. CONN_MANAGER_HIGH_WATER - Connection manager high water mark
# 7. CONN_MANAGER_LOW_WATER - Connection manager low water mark
# 8. ENABLE_SLOT_VALIDATION - EIP-712 signature validation toggle
# ============================================
# CORE CONFIGURATION
# ============================================
# Unique identifier for this sequencer instance
SEQUENCER_ID=validator3
# P2P port for libp2p networking
P2P_PORT=8003
# Redis configuration (required for dequeuer functionality)
# Hostname or IP (use 'redis' for Docker)
REDIS_HOST=redis
# Port number
REDIS_BIND_PORT=6380
# Database number (0-15)
REDIS_DB=0
# Password (leave empty if no auth)
REDIS_PASSWORD=
# RabbitMQ configuration (required for relayer-py service)
# Use 'rabbitmq' for Docker, 'localhost' for local binary
RABBITMQ_HOST=rabbitmq
# Internal port used by relayer-py to connect to RabbitMQ
RABBITMQ_PORT=5672
# External binding port - controls which host port RabbitMQ is accessible from outside Docker
# Change this if you need to access RabbitMQ from host or have port conflicts (default: 5672)
RABBITMQ_EXTERNAL_PORT=5673
# Management UI port (optional - for web interface access)
RABBITMQ_MGMT_PORT=15673
# Monitor API configuration
MONITOR_API_PORT=9091
# Grafana configuration (for monitoring profile)
# Port for Grafana web interface (default: 3000)
GRAFANA_PORT=3003
# Grafana admin password
GRAFANA_PASSWORD=your-password-here
# ============================================
# P2P NETWORK CONFIGURATION
# ============================================
# Bootstrap node multiaddr (REQUIRED for P2P)
# Format: /ip4/<IP>/tcp/<PORT>/p2p/<PEER_ID>
# Example: /ip4/YOUR.SERVER.IP.HERE/tcp/9100/p2p/YOUR_PEER_ID_HERE
BOOTSTRAP_PEERS=/ip4/159.89.94.56/tcp/9101/p2p/12D3KooWQM5fN7nFxGQeY7acpatnaLteFkmthHyMgG5grwTqAAio,/ip4/143.198.161.179/tcp/9101/p2p/12D3KooWHmvWNk3kVfbh73KRZZ54j5vy2HN2AC4oXFr8JrVNnnQr
# Private key for P2P identity (optional - will generate if not provided)
# Format: hex-encoded Ed25519 private key
# Leave empty to generate a new key on each start
PRIVATE_KEY=xccfdfv8978f5e7002716ec61b500031aa1669d31df85442e88deb0b02b4b40c5de50b59c00e6f931cbcddf9e338b17bd28afdcc7c38c2746540c7f3fe7b1750091
# Derived Peer ID: 12Dfdsf43r34rdfr4345345
RENDEZVOUS_POINT=powerloom-dsv-devnet-alpha
PUBLIC_IP=x.x.x.x
# ============================================
# GOSSIPSUB TOPIC CONFIGURATION
# ============================================
# Configure gossipsub topic names for P2P networking
# These allow customization of topic names for different deployment scenarios
# Snapshot submission topics (discovery and main submissions)
# Format: {prefix}/0 (discovery), {prefix}/all (submissions)
GOSSIPSUB_SNAPSHOT_SUBMISSION_PREFIX=/powerloom/dsv-devnet-alpha/snapshot-submissions
# Finalized batch topics (discovery and batch exchange)
# Format: {prefix}/0 (discovery), {prefix}/all (batches)
GOSSIPSUB_FINALIZED_BATCH_PREFIX=/powerloom/dsv-devnet-alpha/finalized-batches
# Validator consensus topics
# Presence topic for validator heartbeat and discovery
GOSSIPSUB_VALIDATOR_PRESENCE_TOPIC=/powerloom/dsv-devnet-alpha/validator/presence
# Consensus voting and proposal topics
# Used for validator consensus and batch aggregation coordination
GOSSIPSUB_CONSENSUS_VOTES_TOPIC=/powerloom/dsv-devnet-alpha/consensus/votes
GOSSIPSUB_CONSENSUS_PROPOSALS_TOPIC=/powerloom/dsv-devnet-alpha/consensus/proposals
# ============================================
# COMPONENT TOGGLES
# ============================================
# Enable/disable individual components (true/false)
# P2P gossipsub listener
ENABLE_LISTENER=true
# Redis queue processor
ENABLE_DEQUEUER=true
# Event monitor for EpochReleased events
ENABLE_EVENT_MONITOR=true
# Batch finalizer
ENABLE_FINALIZER=true
# Consensus voting
ENABLE_BATCH_AGGREGATION=true
# ============================================
# BATCH AGGREGATION CONFIGURATION
# ============================================
# ⚠️ IMPORTANT: These values are for TESTING ONLY!
# In production/mainnet, ALL aggregation parameters MUST be read from:
# - Protocol State Contract
# - Data Market Contracts
# Environment variables CANNOT override on-chain parameters in production
# Batch aggregation parameters (TESTING ONLY - mainnet reads from contracts)
# Percentage of validators required for consensus (67%)
VOTING_THRESHOLD=0.67
# Minimum number of validators for valid consensus
MIN_VALIDATORS=3
# Timeout for consensus voting in seconds (5 minutes)
CONSENSUS_TIMEOUT=300
# How often to check consensus status
CONSENSUS_INTERVAL=60
# Level 2 Aggregation Window (Network-wide batch collection)
# Time to wait for validator finalizations before aggregating network consensus
# First remote batch arrival starts timer, additional batches collected during window
# Window expiration triggers final Level 2 aggregation combining all validator views
# IMPORTANT: This should be coordinated with contract submission window timing
# If contract has preSubmissionWindow=0, this gives validators time to aggregate votes
# before submissions can start (since submissions can start immediately after epoch release)
AGGREGATION_WINDOW_SECONDS=30
# Validator settings (TESTING ONLY - mainnet reads from contracts)
# Minimum POWER tokens required to be a validator
VALIDATOR_STAKE_THRESHOLD=1000
# Maximum stake considered for voting weight
VALIDATOR_MAX_STAKE=100000
# Consensus monitoring and logging
# Logging detail for consensus process
CONSENSUS_LOGGING_LEVEL=info
# Enable detailed metrics for consensus tracking
CONSENSUS_METRICS_ENABLED=true
# ============================================
# DEQUEUER CONFIGURATION
# ============================================
# For Docker Compose scaling (number of dequeuer containers)
DEQUEUER_REPLICAS=1
# ============================================
# FINALIZER CONFIGURATION
# (ONLY NEEDED IF ENABLE_FINALIZER=true)
# ============================================
# The finalizer component handles batch finalization and storage
# These settings are inherited from the centralized sequencer
# and are NOT needed for basic submission listening/processing
# Number of finalizer instances (for redundancy)
FINALIZER_REPLICAS=1
# Storage provider: ipfs, arweave, filecoin
# NOTE: Requires corresponding storage service to be running
STORAGE_PROVIDER=ipfs
# IPFS node address (ONLY if using IPFS for finalized batch storage)
# You need to run an IPFS node separately if finalizer is enabled
# Use multiaddr format, e.g. /dns/external-ipfs/tcp/5001 or /ip4/172.29.0.2/tcp/5001
IPFS_HOST=/dns/external-ipfs/tcp/5001
# For Docker: use ipfs:5001
# IPFS Port Configuration (for Docker deployment)
# ============================================
# Controls which ports are exposed from the IPFS container
# Only change if you have port conflicts on the host system
# IPFS API port (required for DSV components to communicate with IPFS)
# Default: 5001
# IPFS_API_PORT=5001
# IPFS Swarm port (P2P networking - allows other IPFS nodes to connect)
# Default: 4001
# IPFS_SWARM_PORT=4001
# ============================================
# IPFS CLEANUP CONFIGURATION (for local IPFS service)
# ============================================
# These settings only apply when using the built-in IPFS service (./dsv.sh start --with-ipfs)
# Automatically unpins old CIDs to prevent storage bloat
# Maximum age for pins before cleanup (in days)
# CIDs older than this will be unpinned automatically
# IPFS_CLEANUP_MAX_AGE_DAYS=7
# Cleanup interval (in hours)
# How often to run the cleanup process
# IPFS_CLEANUP_INTERVAL_HOURS=72 # Every 3 days
# IPFS Data Directory Configuration
# ============================================
# Controls where IPFS data is stored on the host system
# This allows mounting IPFS data on large partitions or dedicated storage
# Host directory for IPFS data storage
# Default: /data/ipfs (use large partition when available)
# For production: Set to your high-capacity storage mount point
# IPFS_DATA_DIR=/data/ipfs
# Example configurations:
# - Large partition: IPFS_DATA_DIR=/mnt/storage/ipfs
# - Dedicated disk: IPFS_DATA_DIR=/media/nvme1n1/ipfs
# - Default location: IPFS_DATA_DIR=/data/ipfs
# Data availability layer: none, eigenda, celestia, avail
# For future integration with DA layers
DA_PROVIDER=none
# RPC Configuration (Required for event monitoring and contract interactions)
# All RPC interactions in this component are with the Powerloom protocol chain
POWERLOOM_RPC_NODES=https://rpc-devnet.powerloom.dev
# Optional archive nodes for historical queries
POWERLOOM_ARCHIVE_RPC_NODES=[]
# Source Chain RPC Configuration (JSON array)
# where data is sourced from (e.g., Ethereum, Polygon)
SOURCE_RPC_NODES=<eth-node-url>
# Source Archive nodes (optional, JSON array)
SOURCE_ARCHIVE_RPC_NODES=[]
# Protocol State Contract (REQUIRED for event monitoring)
PROTOCOL_STATE_CONTRACT=0x3B5A0FB70ef68B5dd677C7d614dFB89961f97401
# Data Market Addresses (JSON array or comma-separated)
# These are the markets this sequencer will monitor
DATA_MARKET_ADDRESSES=0xb5cE2F9B71e785e3eC0C45EDE06Ad95c3bb71a4d
# ============================================
# EVENT MONITORING CONFIGURATION
# ============================================
# Path to the Legacy Protocol State Contract ABI JSON file
# This file contains the ABI for parsing EpochReleased events from legacy contracts
CONTRACT_ABI_PATH=./abi/LegacyProtocolState.json
# Path to VPA (ValidatorPriorityAssigner) contract ABI for priority monitoring
VPAContractABIPath=./abi/ValidatorPriorityAssigner.json
# Level 1 Finalization Delay in seconds (used when snapshot commit/reveal is disabled)
# When snapshot commit/reveal windows are disabled, Level 1 finalization triggers after this delay.
# This delay must be BEFORE P1 window closes to allow time for Level 1 + Level 2 aggregation.
# Legacy env var SUBMISSION_WINDOW_DURATION is still supported for backward compatibility.
LEVEL1_FINALIZATION_DELAY_SECONDS=20
# Maximum concurrent submission windows
MAX_CONCURRENT_WINDOWS=100
# Event polling interval in seconds
EVENT_POLL_INTERVAL=1
# Starting block for event monitoring (0 = latest)
EVENT_START_BLOCK=0
# Number of blocks to process in batch
EVENT_BLOCK_BATCH_SIZE=10
# ============================================
# IDENTITY & VERIFICATION
# ============================================
# Full node addresses (JSON array or comma-separated)
# These addresses bypass certain verification checks by the dequeuer
FULL_NODE_ADDRESSES=
# Skip identity verification (for testing only)
SKIP_IDENTITY_VERIFICATION=true
# Check for flagged snapshotters
CHECK_FLAGGED_SNAPSHOTTERS=true
# Verification cache TTL in seconds
VERIFICATION_CACHE_TTL=600
# ============================================
# WORKER CONFIGURATION
# ============================================
# Number of dequeuer workers (parallel submission processors)
# NOTE: Now supports multiple submission formats (single & P2PSnapshotSubmission)
DEQUEUER_WORKERS=1
# Number of finalizer workers (parallel batch processors)
# Workers now support camelCase data models and batch processing
FINALIZER_WORKERS=5
# Batch size for parallel finalization (projects per batch part)
# Supports flexible batch sizes for high and low volume epochs
FINALIZATION_BATCH_SIZE=20
# Conversion strategy for submission format (default: auto-detect)
# Options: 'auto' (recommended), 'single', 'batch'
SUBMISSION_FORMAT_STRATEGY=auto
# ============================================
# DEDUPLICATION CONFIGURATION
# ============================================
# Enable deduplication
DEDUP_ENABLED=true
# Local cache size for deduplication
DEDUP_LOCAL_CACHE_SIZE=10000
# Deduplication TTL in seconds
DEDUP_TTL_SECONDS=7200
# ============================================
# PERFORMANCE TUNING
# ============================================
# Connection manager settings
CONN_MANAGER_LOW_WATER=100
CONN_MANAGER_HIGH_WATER=400
# Gossipsub heartbeat interval in milliseconds
GOSSIPSUB_HEARTBEAT_MS=700
# Maximum submissions per epoch
MAX_SUBMISSIONS_PER_EPOCH=100
# ============================================
# DEBUGGING & MONITORING
# ============================================
# Enable debug logging (true/false)
DEBUG_MODE=true
LOG_LEVEL=debug
# Metrics port for Prometheus
METRICS_PORT=9090
METRICS_ENABLED=false
# Slack webhook for alerts (optional)
SLACK_WEBHOOK_URL=
# Enable advanced pipeline health monitoring
# Provides more detailed alerts and tracking for batch processing stages
PIPELINE_HEALTH_MONITORING=false
# API configuration
API_HOST=0.0.0.0
API_PORT=8080
API_AUTH_TOKEN=
# ============================================
# NEW CONTRACT CONFIGURATION (VPA-ENABLED DEPLOYMENT)
# ============================================
# Enable submission to new VPA-enabled contracts
# When true, DSV submits to new contracts via relayer-py
# When false (default), DSV does NO contract submission
# Legacy contracts are handled by separate snapshotter-lite-v2 workflow directly to centralized sequencer
USE_NEW_CONTRACTS=true
# Enable on-chain submission (required for VPA client initialization)
# When true, aggregator initializes VPA client for priority checking
# When false, VPA client is not initialized and priority checks are skipped
# Must be true when USE_NEW_CONTRACTS=true for VPA-based submissions
ENABLE_ONCHAIN_SUBMISSION=true
# New Protocol State contract address (VPA-enabled)
# This contract has ValidatorPriorityAssigner linked internally
NEW_PROTOCOL_STATE_CONTRACT=0xC9e7304f719D35919b0371d8B242ab59E0966d63
# New Data Market contract address (VPA-enabled)
# This contract works with the new Protocol State contract
NEW_DATA_MARKET_CONTRACT=0xb6c1392944a335b72b9e34f9D4b8c0050cdb511f
# relayer-py service endpoint for submitting batches to new contracts
# When USE_NEW_CONTRACTS=true, aggregator calls relayer-py HTTP API
RELAYER_PY_ENDPOINT=http://relayer-py:8080
# Anchor chain ID for relayer-py transaction signing
# Chain ID of the Powerloom protocol chain (default: 11167 for devnet)
# Required for relayer-py to sign transactions correctly
ANCHOR_CHAIN_ID=11167
# Minimum signer balance check for relayer-py (default: 0 to disable for devnet)
# Set to 0 to disable balance checking, or set a minimum ETH amount (e.g., 0.1)
MIN_SIGNER_BALANCE_ETH=0
# VPA validator address for priority checking
# This validator's address to check if it has priority for current epoch
VPA_VALIDATOR_ADDRESS=
# VPA contract address for priority monitoring (ValidatorPriorityAssigner)
# This is automatically fetched from ProtocolState contract - DO NOT SET
# ============================================
# VPA RELAYER-PY MULTI-SIGNER CONFIGURATION (ADVANCED)
# ============================================
# Multi-signer configuration for relayer-py transaction signing
# This is OPTIONAL - relayer-py will use its own settings.json if not provided
# Most users should configure signers in relayer-py/settings.json directly
# Multiple authorized signer addresses (comma-separated)
# These addresses are authorized to submit batches to VPA contracts
# IMPORTANT: Use different signer addresses than validator1 for security isolation
VPA_SIGNER_ADDRESSES=<your-validator-address>,<your-signer-address>
# Private keys for the signers (comma-separated, MUST match order above)
# WARNING: Keep these secure and never commit to version control
# IMPORTANT: Use different private keys than validator1 for security isolation
VPA_SIGNER_PRIVATE_KEYS=<your-validator-private-key>,<your-signer-private-key>
# ============================================
# QUICK START CONFIGURATIONS
# ============================================
# For LOCAL TESTING (everything on one machine):
# - Use defaults above
# - Set REDIS_ADDR=localhost:6379
# - Leave PRIVATE_KEY empty (will auto-generate)
# - Set DEBUG_MODE=true
# For DOCKER COMPOSE:
# - Set REDIS_ADDR=redis:6379
# - Set IPFS_HOST=ipfs:5001
# - Use the bootstrap multiaddr above
# For PRODUCTION:
# - Generate unique PRIVATE_KEY for each instance
# - Set proper RPC_URL with your API key
# - Set DEBUG_MODE=false
# - Configure STORAGE_PROVIDER and DA_PROVIDER as needed
# For TESTING New Contracts (DSV submission enabled):
# - Set USE_NEW_CONTRACTS=true
# - Configure NEW_PROTOCOL_STATE_CONTRACT and NEW_DATA_MARKET_CONTRACT addresses
# - Start with ./dsv.sh start --with-vpa --with-ipfs
# (relayer-py will use its default settings.json)
# IMPORTANT: DSV does NO contract submission when USE_NEW_CONTRACTS=false
# For TESTING (Environment-based Signers):
# - Set USE_NEW_CONTRACTS=true
# - Configure NEW_*_CONTRACT addresses
# - Set RELAYER_SIGNER_ADDRESSES and RELAYER_SIGNER_PRIVATE_KEYS (comma-separated)
# - Start with ./dsv.sh start --with-vpa --with-ipfs
# (signers configured via environment variables)
# For PRODUCTION:
# - Configure relayer-py/settings.json with production signers and RPC endpoints
# - Or use RELAYER_SIGNER_* environment variables for automated deployment
# - Monitor submission success rates via aggregator and relayer-py logs
# - Set DEBUG_MODE=false
# - Configure STORAGE_PROVIDER and DA_PROVIDER as needed
# Decentralized Sequencer Environment Variables
# The following keys aren't configured here but have corresponding default values that are picked up by Docker Compose or the build scripts, so they're safe to leave alone.
# 1. AGGREGATION_WINDOW_SECONDS - Level 2 aggregation window timing
# 2. BATCH_AGGREGATION_INTERVAL - Batch aggregation check interval
# 3. BATCH_AGGREGATION_LOGGING_LEVEL - Logging detail for batch aggregation
# 4. BATCH_AGGREGATION_METRICS_ENABLED - Enable detailed batch aggregation metrics
# 5. BATCH_AGGREGATION_TIMEOUT - Timeout for batch aggregation voting
# 6. CONN_MANAGER_HIGH_WATER - Connection manager high water mark
# 7. CONN_MANAGER_LOW_WATER - Connection manager low water mark
# 8. ENABLE_SLOT_VALIDATION - EIP-712 signature validation toggle
# ============================================
# CORE CONFIGURATION
# ============================================
# Unique identifier for this sequencer instance
SEQUENCER_ID=validator3
# P2P port for libp2p networking
P2P_PORT=9001
# Redis configuration (required for dequeuer functionality)
# Hostname or IP (use 'redis' for Docker)
REDIS_HOST=redis
# Port number
REDIS_BIND_PORT=6380
# Database number (0-15)
REDIS_DB=0
# Password (leave empty if no auth)
REDIS_PASSWORD=
# RabbitMQ configuration (required for relayer-py service)
# Use 'rabbitmq' for Docker, 'localhost' for local binary
RABBITMQ_HOST=rabbitmq
# Internal port used by relayer-py to connect to RabbitMQ
RABBITMQ_PORT=5672
# External binding port - controls which host port RabbitMQ is accessible from outside Docker
# Change this if you need to access RabbitMQ from host or have port conflicts (default: 5672)
RABBITMQ_EXTERNAL_PORT=5672
# Management UI port (optional - for web interface access)
RABBITMQ_MGMT_PORT=15672
# Monitor API configuration
MONITOR_API_PORT=9091
# Grafana configuration (for monitoring profile)
# Port for Grafana web interface (default: 3000)
GRAFANA_PORT=3003
# Grafana admin password
GRAFANA_PASSWORD=your-password-here
# ============================================
# P2P NETWORK CONFIGURATION
# ============================================
# Bootstrap node multiaddr (REQUIRED for P2P)
# Format: /ip4/<IP>/tcp/<PORT>/p2p/<PEER_ID>
# Example: /ip4/YOUR.SERVER.IP.HERE/tcp/9100/p2p/YOUR_PEER_ID_HERE
BOOTSTRAP_PEERS=/ip4/159.89.94.56/tcp/9101/p2p/12D3KooWQM5fN7nFxGQeY7acpatnaLteFkmthHyMgG5grwTqAAio,/ip4/143.198.161.179/tcp/9101/p2p/12D3KooWHmvWNk3kVfbh73KRZZ54j5vy2HN2AC4oXFr8JrVNnnQr
# Private key for P2P identity (optional - will generate if not provided)
# Format: hex-encoded Ed25519 private key
# Leave empty to generate a new key on each start
PRIVATE_KEY=xccfdfv8978f5e7002716ec61b500031aa1669d31df85442e88deb0b02b4b40c5de50b59c00e6f931cbcddf9e338b17bd28afdcc7c38c2746540c7f3fe7b1750091
# Derived Peer ID: 12Dfdsf43r34rdfr4345345
RENDEZVOUS_POINT=powerloom-dsv-devnet-alpha
PUBLIC_IP=x.x.x.x
# ============================================
# GOSSIPSUB TOPIC CONFIGURATION
# ============================================
# Configure gossipsub topic names for P2P networking
# These allow customization of topic names for different deployment scenarios
# Snapshot submission topics (discovery and main submissions)
# Format: {prefix}/0 (discovery), {prefix}/all (submissions)
GOSSIPSUB_SNAPSHOT_SUBMISSION_PREFIX=/powerloom/dsv-devnet-alpha/snapshot-submissions
# Finalized batch topics (discovery and batch exchange)
# Format: {prefix}/0 (discovery), {prefix}/all (batches)
GOSSIPSUB_FINALIZED_BATCH_PREFIX=/powerloom/dsv-devnet-alpha/finalized-batches
# Validator consensus topics
# Presence topic for validator heartbeat and discovery
GOSSIPSUB_VALIDATOR_PRESENCE_TOPIC=/powerloom/dsv-devnet-alpha/validator/presence
# Consensus voting and proposal topics
# Used for validator consensus and batch aggregation coordination
GOSSIPSUB_CONSENSUS_VOTES_TOPIC=/powerloom/dsv-devnet-alpha/consensus/votes
GOSSIPSUB_CONSENSUS_PROPOSALS_TOPIC=/powerloom/dsv-devnet-alpha/consensus/proposals
# ============================================
# COMPONENT TOGGLES
# ============================================
# Enable/disable individual components (true/false)
# P2P gossipsub listener
ENABLE_LISTENER=true
# Redis queue processor
ENABLE_DEQUEUER=true
# Event monitor for EpochReleased events
ENABLE_EVENT_MONITOR=true
# Batch finalizer
ENABLE_FINALIZER=true
# Consensus voting
ENABLE_BATCH_AGGREGATION=true
# ============================================
# BATCH AGGREGATION CONFIGURATION
# ============================================
# ⚠️ IMPORTANT: These values are for TESTING ONLY!
# In production/mainnet, ALL aggregation parameters MUST be read from:
# - Protocol State Contract
# - Data Market Contracts
# Environment variables CANNOT override on-chain parameters in production
# Batch aggregation parameters (TESTING ONLY - mainnet reads from contracts)
# Percentage of validators required for consensus (67%)
VOTING_THRESHOLD=0.67
# Minimum number of validators for valid consensus
MIN_VALIDATORS=3
# Timeout for consensus voting in seconds (5 minutes)
CONSENSUS_TIMEOUT=300
# How often to check consensus status
CONSENSUS_INTERVAL=60
# Level 2 Aggregation Window (Network-wide batch collection)
# Time to wait for validator finalizations before aggregating network consensus
# First remote batch arrival starts timer, additional batches collected during window
# Window expiration triggers final Level 2 aggregation combining all validator views
# IMPORTANT: This should be coordinated with contract submission window timing
# If contract has preSubmissionWindow=0, this gives validators time to aggregate votes
# before submissions can start (since submissions can start immediately after epoch release)
AGGREGATION_WINDOW_SECONDS=30
# Validator settings (TESTING ONLY - mainnet reads from contracts)
# Minimum POWER tokens required to be a validator
VALIDATOR_STAKE_THRESHOLD=1000
# Maximum stake considered for voting weight
VALIDATOR_MAX_STAKE=100000
# Consensus monitoring and logging
# Logging detail for consensus process
CONSENSUS_LOGGING_LEVEL=info
# Enable detailed metrics for consensus tracking
CONSENSUS_METRICS_ENABLED=true
# ============================================
# DEQUEUER CONFIGURATION
# ============================================
# For Docker Compose scaling (number of dequeuer containers)
DEQUEUER_REPLICAS=1
# ============================================
# FINALIZER CONFIGURATION
# (ONLY NEEDED IF ENABLE_FINALIZER=true)
# ============================================
# The finalizer component handles batch finalization and storage
# These settings are inherited from the centralized sequencer
# and are NOT needed for basic submission listening/processing
# Number of finalizer instances (for redundancy)
FINALIZER_REPLICAS=1
# Storage provider: ipfs, arweave, filecoin
# NOTE: Requires corresponding storage service to be running
STORAGE_PROVIDER=ipfs
# IPFS node address (ONLY if using IPFS for finalized batch storage)
# You need to run an IPFS node separately if finalizer is enabled
IPFS_HOST=/dns/ipfs/tcp/5001
# For Docker: use ipfs:5001
# IPFS Port Configuration (for Docker deployment)
# ============================================
# Controls which ports are exposed from the IPFS container
# Only change if you have port conflicts on the host system
# IPFS API port (required for DSV components to communicate with IPFS)
# Default: 5001
IPFS_API_PORT=5001
# IPFS Swarm port (P2P networking - allows other IPFS nodes to connect)
# Default: 4001
IPFS_SWARM_PORT=4001
# ============================================
# IPFS CLEANUP CONFIGURATION (for local IPFS service)
# ============================================
# These settings only apply when using the built-in IPFS service (./dsv.sh start --with-ipfs)
# Automatically unpins old CIDs to prevent storage bloat
# Maximum age for pins before cleanup (in days)
# CIDs older than this will be unpinned automatically
IPFS_CLEANUP_MAX_AGE_DAYS=7
# Cleanup interval (in hours)
# How often to run the cleanup process
# Every 3 days
IPFS_CLEANUP_INTERVAL_HOURS=72
# IPFS Data Directory Configuration
# ============================================
# Controls where IPFS data is stored on the host system
# This allows mounting IPFS data on large partitions or dedicated storage
# Host directory for IPFS data storage
# Default: /data/ipfs (use large partition when available)
# For production: Set to your high-capacity storage mount point
IPFS_DATA_DIR=/data/ipfs
# Example configurations:
# - Large partition: IPFS_DATA_DIR=/mnt/storage/ipfs
# - Dedicated disk: IPFS_DATA_DIR=/media/nvme1n1/ipfs
# - Default location: IPFS_DATA_DIR=/data/ipfs
# Data availability layer: none, eigenda, celestia, avail
# For future integration with DA layers
DA_PROVIDER=none
# RPC Configuration (Required for event monitoring and contract interactions)
# All RPC interactions in this component are with the Powerloom protocol chain
POWERLOOM_RPC_NODES=https://rpc-devnet.powerloom.dev
# Optional archive nodes for historical queries
POWERLOOM_ARCHIVE_RPC_NODES=[]
# Source Chain RPC Configuration (JSON array)
# where data is sourced from (e.g., Ethereum, Polygon)
SOURCE_RPC_NODES=<eth-node-url>
# Source Archive nodes (optional, JSON array)
SOURCE_ARCHIVE_RPC_NODES=[]
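# Format examples (hypothetical endpoints; both variables accept a JSON array):
# SOURCE_RPC_NODES=["https://eth-mainnet.example.com/v2/<api-key>"]
# SOURCE_ARCHIVE_RPC_NODES=["https://eth-archive.example.com/v2/<api-key>"]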
# Protocol State Contract (REQUIRED for event monitoring)
PROTOCOL_STATE_CONTRACT=0x3B5A0FB70ef68B5dd677C7d614dFB89961f97401
# Data Market Addresses (JSON array or comma-separated)
# These are the markets this sequencer will monitor
DATA_MARKET_ADDRESSES=0xb5cE2F9B71e785e3eC0C45EDE06Ad95c3bb71a4d
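# Format examples (hypothetical addresses; either form is accepted per the comment above):
# DATA_MARKET_ADDRESSES=["0xMarketOne...","0xMarketTwo..."]
# DATA_MARKET_ADDRESSES=0xMarketOne...,0xMarketTwo...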
# ============================================
# EVENT MONITORING CONFIGURATION
# ============================================
# ABI Directory Configuration
# Base directory where all ABI JSON files are located
# Defaults: /root/abi (Docker), ./abi (local), /app/abi (alternative Docker)
ABI_DIR=/root/abi
# Path to the Legacy Protocol State Contract ABI JSON file
# This file contains the ABI for parsing EpochReleased events from legacy contracts
# Note: This path is relative to ABI_DIR if ABI_DIR is set, otherwise absolute
CONTRACT_ABI_PATH=./abi/LegacyProtocolState.json
# Path to VPA (ValidatorPriorityAssigner) contract ABI for priority monitoring
VPAContractABIPath=./abi/ValidatorPriorityAssigner.json
# Level 1 Finalization Delay in seconds (used when snapshot commit/reveal is disabled)
# When snapshot commit/reveal windows are disabled, Level 1 finalization triggers after this delay.
# This delay must be BEFORE P1 window closes to allow time for Level 1 + Level 2 aggregation.
# Legacy env var SUBMISSION_WINDOW_DURATION is still supported for backward compatibility.
LEVEL1_FINALIZATION_DELAY_SECONDS=20
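# Timing sketch (illustrative, not an enforced default):
# epoch released --(LEVEL1_FINALIZATION_DELAY_SECONDS)--> Level 1 finalization --> Level 2 aggregation --> P1 window closes
# i.e. the delay plus Level 1 + Level 2 processing time must fit within the P1 window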
# Maximum concurrent submission windows
MAX_CONCURRENT_WINDOWS=100
# Event polling interval in seconds
EVENT_POLL_INTERVAL=1
# Starting block for event monitoring (0 = latest)
EVENT_START_BLOCK=0
# Number of blocks to process in batch
EVENT_BLOCK_BATCH_SIZE=10
# ============================================
# IDENTITY & VERIFICATION
# ============================================
# Full node addresses (JSON array or comma-separated)
# These addresses bypass certain verification checks by the dequeuer
FULL_NODE_ADDRESSES=
# Skip identity verification (for testing only)
SKIP_IDENTITY_VERIFICATION=true
# Check for flagged snapshotters
CHECK_FLAGGED_SNAPSHOTTERS=true
# Verification cache TTL in seconds
VERIFICATION_CACHE_TTL=600
# ============================================
# WORKER CONFIGURATION
# ============================================
# Number of dequeuer workers (parallel submission processors)
# NOTE: Now supports multiple submission formats (single & P2PSnapshotSubmission)
DEQUEUER_WORKERS=1
# Number of finalizer workers (parallel batch processors)
# Workers now support camelCase data models and batch processing
FINALIZER_WORKERS=5
# Batch size for parallel finalization (projects per batch part)
# Supports flexible batch sizes for high and low volume epochs
FINALIZATION_BATCH_SIZE=20
# Conversion strategy for submission format (default: auto-detect)
# Options: 'auto' (recommended), 'single', 'batch'
SUBMISSION_FORMAT_STRATEGY=auto
# ============================================
# DEDUPLICATION CONFIGURATION
# ============================================
# Enable deduplication
DEDUP_ENABLED=true
# Local cache size for deduplication
DEDUP_LOCAL_CACHE_SIZE=10000
# Deduplication TTL in seconds
DEDUP_TTL_SECONDS=7200
# ============================================
# PERFORMANCE TUNING
# ============================================
# Connection manager settings
CONN_MANAGER_LOW_WATER=100
CONN_MANAGER_HIGH_WATER=400
# Gossipsub heartbeat interval in milliseconds
GOSSIPSUB_HEARTBEAT_MS=700
# Maximum submissions per epoch
MAX_SUBMISSIONS_PER_EPOCH=100
# ============================================
# DEBUGGING & MONITORING
# ============================================
# Enable debug logging (true/false)
DEBUG_MODE=true
LOG_LEVEL=debug
# Metrics port for Prometheus
METRICS_PORT=9090
METRICS_ENABLED=false
# Slack webhook for alerts (optional)
SLACK_WEBHOOK_URL=
# Enable advanced pipeline health monitoring
# Provides more detailed alerts and tracking for batch processing stages
PIPELINE_HEALTH_MONITORING=false
# API configuration
API_HOST=0.0.0.0
API_PORT=8080
API_AUTH_TOKEN=
# ============================================
# NEW CONTRACT CONFIGURATION (VPA-ENABLED DEPLOYMENT)
# ============================================
# Enable submission to new VPA-enabled contracts
# When true, DSV submits to new contracts via relayer-py
# When false (default), DSV does NO contract submission
# Legacy contracts are handled by a separate snapshotter-lite-v2 workflow that submits directly to the centralized sequencer
USE_NEW_CONTRACTS=true
# New Protocol State contract address (VPA-enabled)
# This contract has ValidatorPriorityAssigner linked internally
NEW_PROTOCOL_STATE_CONTRACT=0xC9e7304f719D35919b0371d8B242ab59E0966d63
# New Data Market contract address (VPA-enabled)
# This contract works with the new Protocol State contract
NEW_DATA_MARKET_CONTRACT=0xb6c1392944a335b72b9e34f9D4b8c0050cdb511f
# relayer-py service endpoint for submitting batches to new contracts
# When USE_NEW_CONTRACTS=true, aggregator calls relayer-py HTTP API
RELAYER_PY_ENDPOINT=http://relayer-py:8080
# Anchor chain ID for relayer-py transaction signing
# Chain ID of the Powerloom protocol chain (default: 11167 for devnet)
# Required for relayer-py to sign transactions correctly
ANCHOR_CHAIN_ID=11167
# Minimum signer balance check for relayer-py (default: 0 to disable for devnet)
# Set to 0 to disable balance checking, or set a minimum ETH amount (e.g., 0.1)
MIN_SIGNER_BALANCE_ETH=0
# Enable on-chain submission (required for VPA client initialization)
# When true, aggregator initializes VPA client for priority checking
# When false, VPA client is not initialized and priority checks are skipped
# Must be true when USE_NEW_CONTRACTS=true for VPA-based submissions
ENABLE_ONCHAIN_SUBMISSION=true
# VPA validator address for priority checking
# This validator's address to check if it has priority for current epoch
# Required when USE_NEW_CONTRACTS=true for priority-based submissions
# TODO: Fill in your validator's Ethereum address
VPA_VALIDATOR_ADDRESS=
# VPA contract address for priority monitoring (ValidatorPriorityAssigner)
# This is automatically fetched from ProtocolState contract - DO NOT SET
# ============================================
# VPA RELAYER-PY MULTI-SIGNER CONFIGURATION (ADVANCED)
# ============================================
# Multi-signer configuration for relayer-py transaction signing
# This is OPTIONAL - relayer-py will use its own settings.json if not provided
# Most users should configure signers in relayer-py/settings.json directly
# Multiple authorized signer addresses (comma-separated)
# These addresses are authorized to submit batches to VPA contracts
VPA_SIGNER_ADDRESSES=<your-validator-address>,<your-signer-address>
# Private keys for the signers (comma-separated, MUST match order above)
# WARNING: Keep these secure and never commit to version control
VPA_SIGNER_PRIVATE_KEYS=<your-validator-private-key>,<your-signer-private-key>
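# Format example (hypothetical values; the Nth key MUST correspond to the Nth address):
# VPA_SIGNER_ADDRESSES=0xValidatorAddr,0xBackupSignerAddr
# VPA_SIGNER_PRIVATE_KEYS=0xValidatorPrivKey,0xBackupSignerPrivKey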
# ============================================
# QUICK START CONFIGURATIONS
# ============================================
# For LOCAL TESTING (everything on one machine):
# - Use defaults above
# - Set REDIS_ADDR=localhost:6379
# - Leave PRIVATE_KEY empty (will auto-generate)
# - Set DEBUG_MODE=true
# For DOCKER COMPOSE:
# - Set REDIS_ADDR=redis:6379
# - Set IPFS_HOST=ipfs:5001
# - Use the bootstrap multiaddr above
# For PRODUCTION:
# - Generate unique PRIVATE_KEY for each instance
# - Set proper POWERLOOM_RPC_NODES / SOURCE_RPC_NODES with your API key
# - Set DEBUG_MODE=false
# - Configure STORAGE_PROVIDER and DA_PROVIDER as needed
# For TESTING New Contracts (DSV submission enabled):
# - Set USE_NEW_CONTRACTS=true
# - Configure NEW_PROTOCOL_STATE_CONTRACT and NEW_DATA_MARKET_CONTRACT addresses
# - Start with ./dsv.sh start --with-vpa --with-ipfs
# (relayer-py will use its default settings.json)
# IMPORTANT: DSV does NO contract submission when USE_NEW_CONTRACTS=false
# For TESTING (Environment-based Signers):
# - Set USE_NEW_CONTRACTS=true
# - Configure NEW_*_CONTRACT addresses
# - Set VPA_SIGNER_ADDRESSES and VPA_SIGNER_PRIVATE_KEYS (comma-separated)
# - Start with ./dsv.sh start --with-vpa --with-ipfs
# (signers configured via environment variables)
# For PRODUCTION:
# - Configure relayer-py/settings.json with production signers and RPC endpoints
# - Or use VPA_SIGNER_* environment variables for automated deployment
# - Monitor submission success rates via aggregator and relayer-py logs
# - Set DEBUG_MODE=false
# - Configure STORAGE_PROVIDER and DA_PROVIDER as needed