|
# Decentralized Sequencer Environment Variables |
|
# The following keys are not configured here; they have default values that are
# picked up by Docker Compose or the build scripts:
|
|
|
# 1. AGGREGATION_WINDOW_SECONDS - Level 2 aggregation window timing |
|
# 2. BATCH_AGGREGATION_INTERVAL - Batch aggregation check interval |
|
# 3. BATCH_AGGREGATION_LOGGING_LEVEL - Logging detail for batch aggregation |
|
# 4. BATCH_AGGREGATION_METRICS_ENABLED - Enable detailed batch aggregation metrics |
|
# 5. BATCH_AGGREGATION_TIMEOUT - Timeout for batch aggregation voting |
|
# 6. CONN_MANAGER_HIGH_WATER - Connection manager high water mark |
|
# 7. CONN_MANAGER_LOW_WATER - Connection manager low water mark |
|
# 8. ENABLE_SLOT_VALIDATION - EIP-712 signature validation toggle |
|
|
|
# ============================================ |
|
# CORE CONFIGURATION |
|
# ============================================ |
|
|
|
# Unique identifier for this sequencer instance |
|
SEQUENCER_ID=validator3 |
|
# P2P port for libp2p networking |
|
P2P_PORT=9001 |
|
|
|
# Redis configuration (required for dequeuer functionality) |
|
# Hostname or IP (use 'redis' for Docker) |
|
REDIS_HOST=redis |
|
# Redis port (host bind port in Docker)
|
REDIS_BIND_PORT=6380 |
|
# Database number (0-15) |
|
REDIS_DB=0 |
|
# Password (leave empty if no auth) |
|
REDIS_PASSWORD= |
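# The four Redis variables above combine into a standard redis:// connection
# URL. A minimal sketch (the redis_url helper is illustrative, not part of the
# codebase); defaults mirror the values in this file:

```python
import os

# Hypothetical helper: assemble REDIS_HOST/REDIS_BIND_PORT/REDIS_DB/
# REDIS_PASSWORD into a standard redis:// URL. Password is omitted when empty.
def redis_url() -> str:
    host = os.environ.get("REDIS_HOST", "redis")
    port = os.environ.get("REDIS_BIND_PORT", "6380")
    db = os.environ.get("REDIS_DB", "0")
    password = os.environ.get("REDIS_PASSWORD", "")
    auth = f":{password}@" if password else ""
    return f"redis://{auth}{host}:{port}/{db}"
```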
|
|
|
# RabbitMQ configuration (required for relayer-py service) |
|
# Use 'rabbitmq' for Docker, 'localhost' for local binary |
|
RABBITMQ_HOST=rabbitmq |
|
# Internal port used by relayer-py to connect to RabbitMQ |
|
RABBITMQ_PORT=5672 |
|
# External binding port - controls which host port RabbitMQ is accessible from outside Docker |
|
# Change this if you need to access RabbitMQ from host or have port conflicts (default: 5672) |
|
RABBITMQ_EXTERNAL_PORT=5672 |
|
# Management UI port (optional - for web interface access) |
|
RABBITMQ_MGMT_PORT=15672 |
|
|
|
# Monitor API configuration |
|
MONITOR_API_PORT=9091 |
|
|
|
|
|
# Grafana configuration (for monitoring profile) |
|
# Port for Grafana web interface (default: 3000) |
|
GRAFANA_PORT=3003 |
|
|
|
# Grafana admin password |
|
GRAFANA_PASSWORD=your-password-here |
|
|
|
# ============================================ |
|
# P2P NETWORK CONFIGURATION |
|
# ============================================ |
|
|
|
# Bootstrap node multiaddr (REQUIRED for P2P) |
|
# Format: /ip4/<IP>/tcp/<PORT>/p2p/<PEER_ID> |
|
# Example: /ip4/YOUR.SERVER.IP.HERE/tcp/9100/p2p/YOUR_PEER_ID_HERE |
|
BOOTSTRAP_PEERS=/ip4/159.89.94.56/tcp/9101/p2p/12D3KooWQM5fN7nFxGQeY7acpatnaLteFkmthHyMgG5grwTqAAio,/ip4/143.198.161.179/tcp/9101/p2p/12D3KooWHmvWNk3kVfbh73KRZZ54j5vy2HN2AC4oXFr8JrVNnnQr |
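# An illustrative parser for the /ip4/<IP>/tcp/<PORT>/p2p/<PEER_ID> format
# documented above (parse_multiaddr is a hypothetical name, not a library
# function); BOOTSTRAP_PEERS is a comma-separated list of such addresses:

```python
# Split a single multiaddr into its IP, port, and peer-ID components.
def parse_multiaddr(addr: str) -> dict:
    # "/ip4/1.2.3.4/tcp/9101/p2p/ID" splits into 7 parts (leading "" included)
    _, _, ip, _, port, _, peer_id = addr.split("/")
    return {"ip": ip, "port": int(port), "peer_id": peer_id}

bootstrap = (
    "/ip4/159.89.94.56/tcp/9101/p2p/12D3KooWQM5fN7nFxGQeY7acpatnaLteFkmthHyMgG5grwTqAAio,"
    "/ip4/143.198.161.179/tcp/9101/p2p/12D3KooWHmvWNk3kVfbh73KRZZ54j5vy2HN2AC4oXFr8JrVNnnQr"
)
peers = [parse_multiaddr(a) for a in bootstrap.split(",")]
```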
|
|
|
# Private key for P2P identity (optional - will generate if not provided) |
|
# Format: hex-encoded Ed25519 private key |
|
# Leave empty to generate a new key on each start |
|
PRIVATE_KEY=xccfdfv8978f5e7002716ec61b500031aa1669d31df85442e88deb0b02b4b40c5de50b59c00e6f931cbcddf9e338b17bd28afdcc7c38c2746540c7f3fe7b1750091 |
|
# Derived Peer ID: 12Dfdsf43r34rdfr4345345 |
|
RENDEZVOUS_POINT=powerloom-dsv-devnet-alpha |
|
PUBLIC_IP=x.x.x.x |
|
|
|
# ============================================ |
|
# GOSSIPSUB TOPIC CONFIGURATION |
|
# ============================================ |
|
# Configure gossipsub topic names for P2P networking |
|
# These allow customization of topic names for different deployment scenarios |
|
# Snapshot submission topics (discovery and main submissions) |
|
# Format: {prefix}/0 (discovery), {prefix}/all (submissions) |
|
GOSSIPSUB_SNAPSHOT_SUBMISSION_PREFIX=/powerloom/dsv-devnet-alpha/snapshot-submissions |
|
|
|
# Finalized batch topics (discovery and batch exchange) |
|
# Format: {prefix}/0 (discovery), {prefix}/all (batches) |
|
GOSSIPSUB_FINALIZED_BATCH_PREFIX=/powerloom/dsv-devnet-alpha/finalized-batches |
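# How the discovery ("/0") and payload ("/all") topics derive from the two
# prefixes above, per the format comments in this file (the topics helper is
# illustrative, not part of the codebase):

```python
# Expand a prefix into its (discovery, payload) topic pair.
def topics(prefix: str) -> tuple:
    return f"{prefix}/0", f"{prefix}/all"

discovery, submissions = topics(
    "/powerloom/dsv-devnet-alpha/snapshot-submissions"
)
```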
|
|
|
# Validator consensus topics |
|
# Presence topic for validator heartbeat and discovery |
|
GOSSIPSUB_VALIDATOR_PRESENCE_TOPIC=/powerloom/dsv-devnet-alpha/validator/presence |
|
|
|
# Consensus voting and proposal topics |
|
# Used for validator consensus and batch aggregation coordination |
|
GOSSIPSUB_CONSENSUS_VOTES_TOPIC=/powerloom/dsv-devnet-alpha/consensus/votes |
|
GOSSIPSUB_CONSENSUS_PROPOSALS_TOPIC=/powerloom/dsv-devnet-alpha/consensus/proposals |
|
|
|
# ============================================ |
|
# COMPONENT TOGGLES |
|
# ============================================ |
|
|
|
# Enable/disable individual components (true/false) |
|
# P2P gossipsub listener |
|
ENABLE_LISTENER=true |
|
# Redis queue processor |
|
ENABLE_DEQUEUER=true |
|
# Event monitor for EpochReleased events |
|
ENABLE_EVENT_MONITOR=true |
|
# Batch finalizer |
|
ENABLE_FINALIZER=true |
|
# Batch aggregation and consensus voting
|
ENABLE_BATCH_AGGREGATION=true |
|
|
|
# ============================================ |
|
# BATCH AGGREGATION CONFIGURATION |
|
# ============================================ |
|
# ⚠️ IMPORTANT: These values are for TESTING ONLY! |
|
# In production/mainnet, ALL aggregation parameters MUST be read from: |
|
# - Protocol State Contract |
|
# - Data Market Contracts |
|
# Environment variables CANNOT override on-chain parameters in production |
|
|
|
# Batch aggregation parameters (TESTING ONLY - mainnet reads from contracts) |
|
# Fraction of validators required for consensus (0.67 = 67%)
|
VOTING_THRESHOLD=0.67 |
|
# Minimum number of validators for valid consensus |
|
MIN_VALIDATORS=3 |
|
# Timeout for consensus voting in seconds (5 minutes) |
|
CONSENSUS_TIMEOUT=300 |
|
# How often to check consensus status |
|
CONSENSUS_INTERVAL=60 |
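# A sketch of how VOTING_THRESHOLD and MIN_VALIDATORS interact (illustrative
# logic only, not the sequencer's implementation; on mainnet these parameters
# come from contracts):

```python
VOTING_THRESHOLD = 0.67
MIN_VALIDATORS = 3

# Consensus requires both a quorum of validators and a 67% approving fraction.
def consensus_reached(votes_for: int, total_validators: int) -> bool:
    if total_validators < MIN_VALIDATORS:
        return False  # too few validators for a valid consensus round
    return votes_for / total_validators >= VOTING_THRESHOLD
```

Note that 2 of 3 approving validators (≈66.7%) falls just short of the 0.67 threshold.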
|
|
|
# Level 2 Aggregation Window (Network-wide batch collection) |
|
# Time to wait for validator finalizations before aggregating network consensus |
|
# First remote batch arrival starts timer, additional batches collected during window |
|
# Window expiration triggers final Level 2 aggregation combining all validator views |
|
# IMPORTANT: This should be coordinated with contract submission window timing |
|
# If contract has preSubmissionWindow=0, this gives validators time to aggregate votes |
|
# before submissions can start (since submissions can start immediately after epoch release) |
|
AGGREGATION_WINDOW_SECONDS=30 |
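# The window behavior described above can be sketched as a small state
# machine (a toy model, not the sequencer's code): the first remote batch
# starts the timer, later batches are collected until expiry.

```python
import time

class AggregationWindow:
    """Toy model of the Level 2 collection window."""

    def __init__(self, window_seconds: float):
        self.window_seconds = window_seconds
        self.started_at = None
        self.batches = []

    def add_batch(self, batch) -> None:
        if self.started_at is None:
            self.started_at = time.monotonic()  # first arrival starts the clock
        self.batches.append(batch)

    def expired(self) -> bool:
        # Expiry would trigger the final Level 2 aggregation.
        return (self.started_at is not None
                and time.monotonic() - self.started_at >= self.window_seconds)
```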
|
|
|
# Validator settings (TESTING ONLY - mainnet reads from contracts) |
|
# Minimum POWER tokens required to be a validator |
|
VALIDATOR_STAKE_THRESHOLD=1000 |
|
# Maximum stake considered for voting weight |
|
VALIDATOR_MAX_STAKE=100000 |
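# One plausible reading of these two knobs (illustrative only; the helper
# name is hypothetical): stake below the threshold is ineligible, and stake
# above the cap earns no extra voting weight.

```python
VALIDATOR_STAKE_THRESHOLD = 1000
VALIDATOR_MAX_STAKE = 100000

def voting_weight(stake: int) -> int:
    if stake < VALIDATOR_STAKE_THRESHOLD:
        return 0  # below the minimum: not a validator
    return min(stake, VALIDATOR_MAX_STAKE)  # cap prevents stake dominance
```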
|
|
|
# Consensus monitoring and logging |
|
# Logging detail for consensus process |
|
CONSENSUS_LOGGING_LEVEL=info |
|
# Enable detailed metrics for consensus tracking |
|
CONSENSUS_METRICS_ENABLED=true |
|
|
|
# ============================================ |
|
# DEQUEUER CONFIGURATION |
|
# ============================================ |
|
|
|
# For Docker Compose scaling (number of dequeuer containers) |
|
DEQUEUER_REPLICAS=1 |
|
|
|
# ============================================ |
|
# FINALIZER CONFIGURATION |
|
# (ONLY NEEDED IF ENABLE_FINALIZER=true) |
|
# ============================================ |
|
# The finalizer component handles batch finalization and storage |
|
# These settings are inherited from the centralized sequencer |
|
# and are NOT needed for basic submission listening/processing |
|
|
|
# Number of finalizer instances (for redundancy) |
|
FINALIZER_REPLICAS=1 |
|
|
|
# Storage provider: ipfs, arweave, filecoin |
|
# NOTE: Requires corresponding storage service to be running |
|
STORAGE_PROVIDER=ipfs |
|
|
|
# IPFS node address (ONLY if using IPFS for finalized batch storage) |
|
# You need to run an IPFS node separately if finalizer is enabled |
|
IPFS_HOST=/dns/ipfs/tcp/5001 |
|
# For Docker, the /dns/ipfs/tcp/5001 form above resolves the 'ipfs' service name
|
|
|
# IPFS Port Configuration (for Docker deployment) |
|
# ============================================ |
|
# Controls which ports are exposed from the IPFS container |
|
# Only change if you have port conflicts on the host system |
|
|
|
# IPFS API port (required for DSV components to communicate with IPFS) |
|
# Default: 5001 |
|
IPFS_API_PORT=5001 |
|
|
|
# IPFS Swarm port (P2P networking - allows other IPFS nodes to connect) |
|
# Default: 4001 |
|
IPFS_SWARM_PORT=4001 |
|
|
|
# ============================================ |
|
# IPFS CLEANUP CONFIGURATION (for local IPFS service) |
|
# ============================================ |
|
# These settings only apply when using the built-in IPFS service (./dsv.sh start --with-ipfs) |
|
# Automatically unpins old CIDs to prevent storage bloat |
|
|
|
# Maximum age for pins before cleanup (in days) |
|
# CIDs older than this will be unpinned automatically |
|
IPFS_CLEANUP_MAX_AGE_DAYS=7 |
|
|
|
# Cleanup interval (in hours) |
|
# How often to run the cleanup process |
|
IPFS_CLEANUP_INTERVAL_HOURS=72 # Every 3 days |
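# The age check behind these two settings amounts to a simple comparison
# (a sketch; the actual cleanup lives in the built-in IPFS service):

```python
SECONDS_PER_DAY = 86400

# A CID is unpinned once its pin is older than IPFS_CLEANUP_MAX_AGE_DAYS.
def should_unpin(pinned_at: float, now: float, max_age_days: int = 7) -> bool:
    return now - pinned_at > max_age_days * SECONDS_PER_DAY
```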
|
|
|
|
|
# IPFS Data Directory Configuration |
|
# ============================================ |
|
# Controls where IPFS data is stored on the host system |
|
# This allows mounting IPFS data on large partitions or dedicated storage |
|
|
|
# Host directory for IPFS data storage |
|
# Default: /data/ipfs (use large partition when available) |
|
# For production: Set to your high-capacity storage mount point |
|
IPFS_DATA_DIR=/data/ipfs |
|
|
|
# Example configurations: |
|
# - Large partition: IPFS_DATA_DIR=/mnt/storage/ipfs |
|
# - Dedicated disk: IPFS_DATA_DIR=/media/nvme1n1/ipfs |
|
# - Default location: IPFS_DATA_DIR=/data/ipfs |
|
|
|
# Data availability layer: none, eigenda, celestia, avail |
|
# For future integration with DA layers |
|
DA_PROVIDER=none |
|
|
|
# RPC Configuration (Required for event monitoring and contract interactions) |
|
# All RPC interactions in this component are with the Powerloom protocol chain |
|
POWERLOOM_RPC_NODES=https://rpc-devnet.powerloom.dev |
|
# Optional archive nodes for historical queries |
|
POWERLOOM_ARCHIVE_RPC_NODES=[] |
|
|
|
# Source Chain RPC Configuration (JSON array) |
|
# RPC node(s) for the chain the data is sourced from (e.g., Ethereum, Polygon)
|
SOURCE_RPC_NODES=<eth-node-url> |
|
# Source Archive nodes (optional, JSON array) |
|
SOURCE_ARCHIVE_RPC_NODES=[] |
|
|
|
# Protocol State Contract (REQUIRED for event monitoring) |
|
PROTOCOL_STATE_CONTRACT=0x3B5A0FB70ef68B5dd677C7d614dFB89961f97401 |
|
|
|
# Data Market Addresses (JSON array or comma-separated) |
|
# These are the markets this sequencer will monitor |
|
DATA_MARKET_ADDRESSES=0xb5cE2F9B71e785e3eC0C45EDE06Ad95c3bb71a4d |
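# A parser for the "JSON array or comma-separated" convention noted above
# (parse_address_list is a hypothetical helper name):

```python
import json

def parse_address_list(raw: str) -> list:
    raw = raw.strip()
    if not raw:
        return []
    if raw.startswith("["):
        return json.loads(raw)  # e.g. ["0xabc", "0xdef"]
    return [a.strip() for a in raw.split(",") if a.strip()]
```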
|
|
|
# ============================================ |
|
# EVENT MONITORING CONFIGURATION |
|
# ============================================ |
|
|
|
# ABI Directory Configuration |
|
# Base directory where all ABI JSON files are located |
|
# Defaults: /root/abi (Docker), ./abi (local), /app/abi (alternative Docker) |
|
ABI_DIR=/root/abi |
|
|
|
# Path to the Legacy Protocol State Contract ABI JSON file |
|
# This file contains the ABI for parsing EpochReleased events from legacy contracts |
|
# Note: This path is relative to ABI_DIR if ABI_DIR is set, otherwise absolute |
|
CONTRACT_ABI_PATH=./abi/LegacyProtocolState.json |
|
|
|
# Path to VPA (ValidatorPriorityAssigner) contract ABI for priority monitoring |
|
VPAContractABIPath=./abi/ValidatorPriorityAssigner.json |
|
|
|
# Level 1 Finalization Delay in seconds (used when snapshot commit/reveal is disabled) |
|
# When snapshot commit/reveal windows are disabled, Level 1 finalization triggers after this delay. |
|
# This delay must end BEFORE the P1 window closes, leaving time for Level 1 + Level 2 aggregation.
|
# Legacy env var SUBMISSION_WINDOW_DURATION is still supported for backward compatibility. |
|
LEVEL1_FINALIZATION_DELAY_SECONDS=20 |
|
|
|
# Maximum concurrent submission windows |
|
MAX_CONCURRENT_WINDOWS=100 |
|
|
|
# Event polling interval in seconds |
|
EVENT_POLL_INTERVAL=1 |
|
|
|
# Starting block for event monitoring (0 = latest) |
|
EVENT_START_BLOCK=0 |
|
|
|
# Number of blocks to process in batch |
|
EVENT_BLOCK_BATCH_SIZE=10 |
|
# ============================================ |
|
# IDENTITY & VERIFICATION |
|
# ============================================ |
|
|
|
# Full node addresses (JSON array or comma-separated) |
|
# These addresses bypass certain verification checks by the dequeuer |
|
FULL_NODE_ADDRESSES= |
|
|
|
# Skip identity verification (for testing only) |
|
SKIP_IDENTITY_VERIFICATION=true |
|
|
|
# Check for flagged snapshotters |
|
CHECK_FLAGGED_SNAPSHOTTERS=true |
|
|
|
# Verification cache TTL in seconds |
|
VERIFICATION_CACHE_TTL=600 |
|
|
|
# ============================================ |
|
# WORKER CONFIGURATION |
|
# ============================================ |
|
|
|
# Number of dequeuer workers (parallel submission processors) |
|
# NOTE: Now supports multiple submission formats (single & P2PSnapshotSubmission) |
|
DEQUEUER_WORKERS=1 |
|
|
|
# Number of finalizer workers (parallel batch processors) |
|
# Workers now support camelCase data models and batch processing |
|
FINALIZER_WORKERS=5 |
|
|
|
# Batch size for parallel finalization (projects per batch part) |
|
# Supports flexible batch sizes for high and low volume epochs |
|
FINALIZATION_BATCH_SIZE=20 |
|
|
|
# Conversion strategy for submission format (default: auto-detect) |
|
# Options: 'auto' (recommended), 'single', 'batch' |
|
SUBMISSION_FORMAT_STRATEGY=auto |
|
|
|
# ============================================ |
|
# DEDUPLICATION CONFIGURATION |
|
# ============================================ |
|
|
|
# Enable deduplication |
|
DEDUP_ENABLED=true |
|
|
|
# Local cache size for deduplication |
|
DEDUP_LOCAL_CACHE_SIZE=10000 |
|
|
|
# Deduplication TTL in seconds |
|
DEDUP_TTL_SECONDS=7200 |
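# A toy TTL-bounded dedup check matching the three knobs above (illustrative,
# not the sequencer's implementation):

```python
import time

class DedupCache:
    """Remember submission IDs for up to ttl_seconds, bounded by max_size."""

    def __init__(self, ttl_seconds: float, max_size: int):
        self.ttl = ttl_seconds
        self.max_size = max_size
        self.seen = {}  # submission id -> first-seen timestamp

    def is_duplicate(self, submission_id: str) -> bool:
        now = time.monotonic()
        # Evict expired entries, then check membership.
        self.seen = {k: t for k, t in self.seen.items() if now - t < self.ttl}
        if submission_id in self.seen:
            return True
        if len(self.seen) < self.max_size:
            self.seen[submission_id] = now
        return False
```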
|
|
|
# ============================================ |
|
# PERFORMANCE TUNING |
|
# ============================================ |
|
|
|
# Connection manager settings |
|
CONN_MANAGER_LOW_WATER=100 |
|
CONN_MANAGER_HIGH_WATER=400 |
|
|
|
# Gossipsub heartbeat interval in milliseconds |
|
GOSSIPSUB_HEARTBEAT_MS=700 |
|
|
|
# Maximum submissions per epoch |
|
MAX_SUBMISSIONS_PER_EPOCH=100 |
|
|
|
# ============================================ |
|
# DEBUGGING & MONITORING |
|
# ============================================ |
|
|
|
# Enable debug logging (true/false) |
|
DEBUG_MODE=true |
|
LOG_LEVEL=debug |
|
|
|
# Metrics port for Prometheus |
|
METRICS_PORT=9090 |
|
METRICS_ENABLED=false |
|
|
|
# Slack webhook for alerts (optional) |
|
SLACK_WEBHOOK_URL= |
|
|
|
# Enable advanced pipeline health monitoring |
|
# Provides more detailed alerts and tracking for batch processing stages |
|
PIPELINE_HEALTH_MONITORING=false |
|
|
|
# API configuration |
|
API_HOST=0.0.0.0 |
|
API_PORT=8080 |
|
API_AUTH_TOKEN= |
|
|
|
# ============================================ |
|
# NEW CONTRACT CONFIGURATION (VPA-ENABLED DEPLOYMENT) |
|
# ============================================ |
|
|
|
# Enable submission to new VPA-enabled contracts |
|
# When true, DSV submits to new contracts via relayer-py |
|
# When false (default), DSV does NO contract submission |
|
# Legacy contracts are handled by separate snapshotter-lite-v2 workflow directly to centralized sequencer |
|
USE_NEW_CONTRACTS=true |
|
|
|
# New Protocol State contract address (VPA-enabled) |
|
# This contract has ValidatorPriorityAssigner linked internally |
|
NEW_PROTOCOL_STATE_CONTRACT=0xC9e7304f719D35919b0371d8B242ab59E0966d63 |
|
|
|
# New Data Market contract address (VPA-enabled) |
|
# This contract works with the new Protocol State contract |
|
NEW_DATA_MARKET_CONTRACT=0xb6c1392944a335b72b9e34f9D4b8c0050cdb511f |
|
|
|
# relayer-py service endpoint for submitting batches to new contracts |
|
# When USE_NEW_CONTRACTS=true, aggregator calls relayer-py HTTP API |
|
RELAYER_PY_ENDPOINT=http://relayer-py:8080 |
|
|
|
# Anchor chain ID for relayer-py transaction signing |
|
# Chain ID of the Powerloom protocol chain (default: 11167 for devnet) |
|
# Required for relayer-py to sign transactions correctly |
|
ANCHOR_CHAIN_ID=11167 |
|
|
|
# Minimum signer balance check for relayer-py (default: 0 to disable for devnet) |
|
# Set to 0 to disable balance checking, or set a minimum ETH amount (e.g., 0.1) |
|
MIN_SIGNER_BALANCE_ETH=0 |
|
|
|
# Enable on-chain submission (required for VPA client initialization) |
|
# When true, aggregator initializes VPA client for priority checking |
|
# When false, VPA client is not initialized and priority checks are skipped |
|
# Must be true when USE_NEW_CONTRACTS=true for VPA-based submissions |
|
ENABLE_ONCHAIN_SUBMISSION=true |
|
|
|
# VPA validator address for priority checking |
|
# This validator's address to check if it has priority for current epoch |
|
# Required when USE_NEW_CONTRACTS=true for priority-based submissions |
|
# TODO: Fill in your validator's Ethereum address |
|
VPA_VALIDATOR_ADDRESS= |
|
# VPA contract address for priority monitoring (ValidatorPriorityAssigner) |
|
# This is automatically fetched from ProtocolState contract - DO NOT SET |
|
|
|
# ============================================ |
|
# VPA RELAYER-PY MULTI-SIGNER CONFIGURATION (ADVANCED) |
|
# ============================================ |
|
|
|
# Multi-signer configuration for relayer-py transaction signing |
|
# This is OPTIONAL - relayer-py will use its own settings.json if not provided |
|
# Most users should configure signers in relayer-py/settings.json directly |
|
|
|
# Multiple authorized signer addresses (comma-separated) |
|
# These addresses are authorized to submit batches to VPA contracts |
|
VPA_SIGNER_ADDRESSES=<your-validator-address>,<your-signer-address> |
|
|
|
# Private keys for the signers (comma-separated, MUST match order above) |
|
# WARNING: Keep these secure and never commit to version control |
|
VPA_SIGNER_PRIVATE_KEYS=<your-validator-private-key>,<your-signer-private-key> |
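# Pairing the two comma-separated lists above requires the counts and order
# to match; a sketch of that validation (pair_signers is a hypothetical
# helper, not part of relayer-py):

```python
def pair_signers(addresses: str, keys: str) -> list:
    addrs = [a.strip() for a in addresses.split(",") if a.strip()]
    privs = [k.strip() for k in keys.split(",") if k.strip()]
    if len(addrs) != len(privs):
        # A mismatch would silently mis-assign keys, so fail loudly.
        raise ValueError("signer address and key lists must be the same length")
    return list(zip(addrs, privs))
```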
|
|
|
# ============================================ |
|
# QUICK START CONFIGURATIONS |
|
# ============================================ |
|
|
|
# For LOCAL TESTING (everything on one machine): |
|
# - Use defaults above |
|
# - Set REDIS_HOST=localhost and REDIS_BIND_PORT to your local Redis port
|
# - Leave PRIVATE_KEY empty (will auto-generate) |
|
# - Set DEBUG_MODE=true |
|
|
|
# For DOCKER COMPOSE: |
|
# - Set REDIS_HOST=redis (the docker-compose service name)
|
# - Set IPFS_HOST=/dns/ipfs/tcp/5001
|
# - Use the bootstrap multiaddr above |
|
|
|
# For PRODUCTION: |
|
# - Generate unique PRIVATE_KEY for each instance |
|
# - Set POWERLOOM_RPC_NODES and SOURCE_RPC_NODES with your API keys
|
# - Set DEBUG_MODE=false |
|
# - Configure STORAGE_PROVIDER and DA_PROVIDER as needed |
|
|
|
# For TESTING New Contracts (DSV submission enabled): |
|
# - Set USE_NEW_CONTRACTS=true |
|
# - Configure NEW_PROTOCOL_STATE_CONTRACT and NEW_DATA_MARKET_CONTRACT addresses |
|
# - Start with ./dsv.sh start --with-vpa --with-ipfs |
|
# (relayer-py will use its default settings.json) |
|
# IMPORTANT: DSV does NO contract submission when USE_NEW_CONTRACTS=false |
|
|
|
# For TESTING (Environment-based Signers): |
|
# - Set USE_NEW_CONTRACTS=true |
|
# - Configure NEW_*_CONTRACT addresses |
|
# - Set VPA_SIGNER_ADDRESSES and VPA_SIGNER_PRIVATE_KEYS (comma-separated)
|
# - Start with ./dsv.sh start --with-vpa --with-ipfs |
|
# (signers configured via environment variables) |
|
|
|
# For PRODUCTION: |
|
# - Configure relayer-py/settings.json with production signers and RPC endpoints |
|
# - Or use VPA_SIGNER_* environment variables for automated deployment
|
# - Monitor submission success rates via aggregator and relayer-py logs |
|
# - Set DEBUG_MODE=false |
|
# - Configure STORAGE_PROVIDER and DA_PROVIDER as needed |