@viniciusjssouza
Created February 27, 2026 21:59
Hedera low-level architecture

Here are the wiring diagrams split by stage. Boxes are repeated where a module participates in multiple stages.


Stage 1 — gRPC Request Processing (Client → Transaction Pool)

flowchart TD
    CLIENT["🌐 External Client\n(gRPC call)"]

    subgraph GRPC_SERVER["GrpcServerManager (NettyGrpcServerManager)"]
        PORT_PLAIN["Plain port\n(GrpcConfig.port)"]
        PORT_TLS["TLS port\n(GrpcConfig.tlsPort)"]
        PORT_NODE_OP["Node Operator port\n(GrpcConfig.nodeOperatorPort)"]
    end

    subgraph SERVICE_BUILDER["GrpcServiceBuilder\n(per gRPC service, e.g. CryptoService, TokenService…)"]
        TX_METHOD["TransactionMethod\n(one per transaction RPC)"]
        QUERY_METHOD["QueryMethod\n(one per query RPC)"]
    end

    subgraph INGEST["IngestWorkflowImpl\n(submitTransaction)"]
        PRE_CHECK["0. IngestChecker::verifyReadyForTransactions\n(node state / platform status check)"]

        subgraph RUN_ALL_CHECKS["IngestChecker::runAllChecks"]
            PARSE["1. TransactionChecker::parseSignedAndCheck\n(parse protobuf, structural + syntax checks)"]
            NODE_CHECK["1a. verify transaction sent to this node"]
            TIMEBOX["2. TransactionChecker::checkTimeBox\n(valid start + duration window)"]
            DEDUP_IC["3. DeduplicationCache::contains\n(reject known duplicate txn IDs)"]
            THROTTLE["4. checkThrottles\n(SynchronizedThrottleAccumulator — reject if over capacity)"]
            PURE["4a. dispatcher::dispatchPureChecks\n(stateless semantic checks)"]
            PAYER["5. SolvencyPreCheck::getPayerAccount\n(look up payer account)"]
            SIG_VERIFY["6. SignatureVerifier::verify\n(verify payer cryptographic signature)"]
            SOLVENCY["7. SolvencyPreCheck::checkSolvency\n(payer account balance ≥ fees)"]

            PARSE --> NODE_CHECK --> TIMEBOX --> DEDUP_IC --> THROTTLE --> PURE --> PAYER --> SIG_VERIFY --> SOLVENCY
        end

        SUBMIT["SubmissionManager::submit\n(final dedup check + submitApplicationTransaction)"]
    end

    subgraph TX_POOL["TransactionPoolNexus"]
        POOL_IN["submitApplicationTransaction(serializedSignedTx)"]
    end

    subgraph QUERY_FLOW["QueryWorkflow\n(query path — read-only, no pool)"]
        QUERY_HANDLER["handleQuery\n(read-only state access, return response)"]
    end

    %% Client → gRPC server ports
    CLIENT -->|"gRPC request (plain)"| PORT_PLAIN
    CLIENT -->|"gRPC request (TLS)"| PORT_TLS
    CLIENT -->|"gRPC request (node-op)"| PORT_NODE_OP

    %% gRPC server → service builder methods
    PORT_PLAIN --> TX_METHOD
    PORT_PLAIN --> QUERY_METHOD
    PORT_TLS --> TX_METHOD
    PORT_TLS --> QUERY_METHOD
    PORT_NODE_OP --> TX_METHOD
    PORT_NODE_OP --> QUERY_METHOD

    %% Transaction path
    TX_METHOD -->|"raw bytes (BufferedData)"| PRE_CHECK
    PRE_CHECK --> RUN_ALL_CHECKS
    SOLVENCY --> SUBMIT
    SUBMIT -->|"transactionPool.submitApplicationTransaction"| POOL_IN

    %% Query path (separate, no pool)
    QUERY_METHOD -->|"raw bytes (BufferedData)"| QUERY_HANDLER

    %% Failure paths
    PRE_CHECK -->|"PLATFORM_NOT_ACTIVE\n→ PreCheckException"| FAIL["❌ PreCheckException\n→ gRPC error response"]
    DEDUP_IC -->|"DUPLICATE_TRANSACTION"| FAIL
    THROTTLE -->|"BUSY / throttled\n(throttle capacity reclaimed on failure)"| FAIL
    SIG_VERIFY -->|"INVALID_SIGNATURE"| FAIL
    SOLVENCY -->|"INSUFFICIENT_PAYER_BALANCE"| FAIL
    SUBMIT -->|"DUPLICATE_TRANSACTION\nor PLATFORM_TRANSACTION_NOT_CREATED"| FAIL

    %% Pool → downstream (Stage 2)
    POOL_IN -->|"transactions available\nfor event creation\n(Stage 2 — EventCreationManager)"| _EC["DefaultEventCreatorModule\n(see Stage 2)"]

Key Design Points for Stage 1

| Step | Component | Purpose |
|---|---|---|
| 0 | IngestChecker::verifyReadyForTransactions | Reject if platform not ACTIVE |
| 1 | TransactionChecker::parseSignedAndCheck | Parse protobuf, structural + syntax validation |
| 1a | node check | Reject if transaction not addressed to this node |
| 2 | TransactionChecker::checkTimeBox | Reject expired or far-future transactions |
| 3 | DeduplicationCache::contains | Reject already-seen transaction IDs |
| 4 | checkThrottles | Rate-limit by function type (TPS throttle) |
| 4a | dispatcher::dispatchPureChecks | Stateless semantic checks (no state access) |
| 5 | SolvencyPreCheck::getPayerAccount | Look up payer account in state |
| 6 | SignatureVerifier::verify | Cryptographic signature verification |
| 7 | SolvencyPreCheck::checkSolvency | Payer account has enough balance for fees |
|  | SubmissionManager::submit | Final dedup check + submit to TransactionPoolNexus |
  • Throttle capacity is reclaimed if any subsequent check fails after step 4 — so throttle slots are not wasted on rejected transactions.
  • SubmissionManager performs a second dedup check (synchronized) before calling transactionPool.submitApplicationTransaction, to prevent duplicate submissions even under concurrent ingest.
  • Query methods bypass the ingest workflow entirely and go directly to QueryWorkflow — they never touch the transaction pool.
  • The three ports (plain, TLS, node-operator) all route through the same GrpcServiceBuilder / IngestWorkflow pipeline; the difference is only at the transport/auth layer.
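The throttle-reclaim behaviour from the first bullet can be sketched as follows. This is a toy model under invented names (IngestSketch, tryAcquireThrottleSlot), not the actual IngestChecker code; it only illustrates the ordering of the checks and the leak-back of a throttle slot when a later check rejects the transaction:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicInteger;

/** Illustrative stand-in for the Stage 1 check ordering; not the real Hedera API. */
public class IngestSketch {
    private final Set<String> seenTxnIds = new HashSet<>();  // DeduplicationCache stand-in
    private final AtomicInteger throttleSlots;               // throttle-capacity stand-in

    public IngestSketch(int throttleCapacity) {
        this.throttleSlots = new AtomicInteger(throttleCapacity);
    }

    /** Returns "OK" if the transaction may be submitted to the pool, else an error code. */
    public String runAllChecks(String txnId, long payerBalance, long fee, boolean sigValid) {
        if (seenTxnIds.contains(txnId)) return "DUPLICATE_TRANSACTION";      // step 3
        if (!tryAcquireThrottleSlot()) return "BUSY";                        // step 4
        String failure = null;
        if (!sigValid) failure = "INVALID_SIGNATURE";                        // step 6
        else if (payerBalance < fee) failure = "INSUFFICIENT_PAYER_BALANCE"; // step 7
        if (failure != null) {
            // Key point: the throttle slot is reclaimed on any post-throttle failure,
            // so capacity is not wasted on rejected transactions.
            throttleSlots.incrementAndGet();
            return failure;
        }
        seenTxnIds.add(txnId);
        return "OK";
    }

    private boolean tryAcquireThrottleSlot() {
        int s;
        do {
            s = throttleSlots.get();
            if (s <= 0) return false;
        } while (!throttleSlots.compareAndSet(s, s - 1));
        return true;
    }

    public int availableSlots() {
        return throttleSlots.get();
    }
}
```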

Stage 2 — Event Creation, Intake & Persistence (until ready for gossip)

flowchart TB
    %% ── External triggers ──
    HB_EC["⏱ Heartbeat\n(creationAttemptRate)"]
    TX_POOL["TransactionPoolNexus\n(Stage 1)"]
    GOSSIP_RECV["GossipModule\n(receivedEventOutputWire)\nsee Stage 3"]

    %% ── Cross-cutting inputs (from Stage 6 / Stage 3) ──
    PLAT_STATUS["PlatformMonitor\n(PlatformStatus)\nsee Stage 6"]
    HEALTH["HealthMonitor\n(Health Duration)\nsee Stage 6"]
    EV_WINDOW["EventWindowManager\n(EventWindow)\nsee Stage 6"]
    SYNC_PROG["GossipModule\n(SyncProgress)\nsee Stage 3"]

    subgraph EVENT_CREATOR["DefaultEventCreatorModule (EventCreationManager)"]
        direction TB
        CREATION_RULES["Creation gates (DefaultEventCreationManager)\n• PlatformStatusRule — must be ACTIVE\n• PlatformHealthRule — system healthy\n• MaximumRateRule — max rate\n• SyncLagRule — sync lag\n• QuiescenceRule — quiescence"]
        EC_MAYBE["TipsetEventCreator::maybeCreateEvent\n(selects parents, pulls txns from pool, signs)"]
        CREATION_RULES -->|"isEventCreationPermitted()"| EC_MAYBE
    end

    subgraph EVENT_INTAKE["DefaultEventIntakeModule"]
        direction TB
        EI_IN_UNHASHED["EventHasher::hashEvent\n(gossip-received · PCES replay)"]
        EI_IN_NONVAL["InternalEventValidator::validateEvent\n(self-created — already hashed)"]
        DEDUP["EventDeduplicator::handleEvent"]
        SIG_VAL["EventSignatureValidator::validateSignature"]
        ORPHAN["OrphanBuffer::handleEvent\n(buffers until parents known)"]
        EI_OUT["✅ validated events"]
        EI_IN_UNHASHED -->|"validateEvent"| EI_IN_NONVAL
        EI_IN_NONVAL -->|"handleEvent"| DEDUP
        DEDUP -->|"validateSignature"| SIG_VAL
        SIG_VAL -->|"handleEvent"| ORPHAN
        ORPHAN --> EI_OUT
    end

    subgraph PCES["PCESModule (Pre-Consensus Event Stream)"]
        direction TB
        PCES_IN["write to disk\n(eventsToWriteInputWire)"]
        PCES_OUT_WRITTEN["✅ confirmed written to disk\n(writtenEventsOutputWire)"]
        PCES_OUT_REPLAY["replay on restart\n(pcesEventsToReplay)"]
        PCES_IN --> PCES_OUT_WRITTEN
    end

    %% ── Cross-cutting inputs into EventCreator ──
    HB_EC -->|"maybeCreateEvent (OFFER)"| EC_MAYBE
    TX_POOL -->|"pull via EventTransactionSupplier"| EC_MAYBE
    PLAT_STATUS -->|"updatePlatformStatus"| EVENT_CREATOR
    HEALTH -->|"reportUnhealthyDuration"| EVENT_CREATOR
    EV_WINDOW -->|"setEventWindow (INJECT)"| EVENT_CREATOR
    SYNC_PROG -->|"reportSyncProgress"| EVENT_CREATOR

    %% ── EventWindow also fans into intake & PCES ──
    EV_WINDOW -->|"setEventWindow (INJECT)"| EI_IN_UNHASHED
    EV_WINDOW -->|"updateNonAncientEventBoundary (INJECT)"| PCES_IN

    %% ── Main flow ──
    EC_MAYBE -->|"validateEvent (INJECT)"| EI_IN_NONVAL
    EI_OUT -->|"writeEvent"| PCES_IN
    PCES_OUT_REPLAY -->|"hashEvent"| EI_IN_UNHASHED
    GOSSIP_RECV -->|"hashEvent"| EI_IN_UNHASHED

    %% ── After persistence: fan-out ──
    PCES_OUT_WRITTEN -->|"ConsensusEngine::addEvent"| _HG["HashgraphModule\n(see Stage 4)"]
    PCES_OUT_WRITTEN -->|"synchronizer.addEvent\nrpcProtocol.addEvent (INJECT)"| _G["GossipModule\n(see Stage 3)"]
    PCES_OUT_WRITTEN -->|"EventCreationManager::registerEvent"| EC_MAYBE

Key Design Points for Stage 2

| Step | Component | Notes |
|---|---|---|
| Heartbeat | model.buildHeartbeatWire(creationAttemptRate) | Drives maybeCreateEvent via OFFER solder |
| Creation rules | DefaultEventCreationManager | Gates creation: platform status, health, rate, sync lag, quiescence |
| Transaction pull | TipsetEventCreator via EventTransactionSupplier | Pulls pending txns from TransactionPoolNexus at creation time |
| Self-created events | createdEventOutputWire → nonValidatedEventsInputWire (INJECT) | Skip hashing — already signed; go straight to InternalEventValidator |
| Gossip-received events | GossipModule::receivedEventOutputWire → unhashedEventsInputWire | Events received from peers via RpcProtocol → receivedEventHandler → full pipeline: hash → validate → deduplicate → verify sig → orphan buffer |
| PCES replay | pcesEventsToReplay → unhashedEventsInputWire | On restart, replayed events go through the full pipeline |
| Persistence gate | validatedEventsOutputWire → eventsToWriteInputWire → writtenEventsOutputWire | Events only reach Hashgraph, Gossip, and EventCreator after being written to disk |
| Ready for gossip | writtenEventsOutputWire → eventToGossipInputWire (INJECT) | INJECT prevents blocking if the gossip queue is full |
| Parent registration | writtenEventsOutputWire → orderedEventInputWire | Persisted events registered as eligible parents for future event creation |
| EventWindow | EventWindowDispatcher (WireTransformer) | Fans out INJECT to Deduplicator, SignatureValidator, and OrphanBuffer inside intake |
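The OrphanBuffer step in the intake pipeline above hinges on one invariant: an event is released downstream only once all of its parents have been released. A minimal sketch of that idea, with invented types (OrphanBufferSketch, string event ids) rather than the platform's real OrphanBuffer:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Illustrative orphan-buffer sketch: hold events until all parents are known. */
public class OrphanBufferSketch {
    private final Set<String> known = new HashSet<>();              // released event ids
    private final Map<String, List<String>> parentsOf = new HashMap<>();
    private final List<String> released = new ArrayList<>();

    /** Offer an event with its parent ids; returns all ids released so far, in order. */
    public List<String> handleEvent(String id, List<String> parents) {
        parentsOf.put(id, parents);
        tryRelease(id);
        return released;
    }

    private void tryRelease(String id) {
        if (known.contains(id)) return;
        for (String p : parentsOf.get(id)) {
            if (!known.contains(p)) return;  // still orphaned: a parent is missing
        }
        known.add(id);
        released.add(id);
        // Releasing this event may un-orphan previously buffered children.
        for (String other : new ArrayList<>(parentsOf.keySet())) {
            tryRelease(other);
        }
    }
}
```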

Stage 3 — Gossip

flowchart TB
    subgraph PCES["PCESModule\n(see Stage 2)"]
        direction TB
        PCES_OUT_WRITTEN["writtenEventsOutputWire\n(confirmed written to disk)"]
    end

    subgraph GOSSIP["GossipModule (SyncGossipModular)"]
        direction TB
        G_IN_EVENT["eventToGossipInputWire"]
        G_IN_WINDOW["eventWindowInputWire"]
        G_IN_STATUS["platformStatusInputWire"]
        G_IN_HEALTH["healthStatusInputWire"]
        G_OUT_RECV["receivedEventOutputWire"]
        G_OUT_SYNC["syncProgressOutputWire"]

        SYNC["ShadowgraphSynchronizer\n(sync with peers — all events)"]
        BCAST["RpcProtocol::addEvent\n(broadcast — self-created events only)"]
        RECV["← RpcPeerHandler\n(events received from peers)"]

        G_IN_EVENT -->|"synchronizer.addEvent"| SYNC
        G_IN_EVENT -->|"rpcProtocol.addEvent\n(only if selfId == creatorId)"| BCAST
        RECV -->|"receivedEventHandler.accept\n(eventOutput::forward)"| G_OUT_RECV
        G_IN_WINDOW -->|"synchronizer.updateEventWindow"| SYNC
    end

    subgraph EVENT_INTAKE["DefaultEventIntakeModule\n(see Stage 2)"]
        direction TB
        EI_IN_UNHASHED["unhashedEventsInputWire\n→ EventHasher::hashEvent"]
    end

    subgraph EVENT_CREATOR["DefaultEventCreatorModule\n(see Stage 2)"]
        direction TB
        EC_IN_SYNC["syncProgressInputWire\n→ reportSyncProgress"]
    end

    subgraph EVENT_WINDOW_MGR["EventWindowManager\n(see Stage 6)"]
        direction TB
        EWM_OUT["eventWindowOutputWire"]
    end

    subgraph PLATFORM_MONITOR["PlatformMonitor\n(see Stage 6)"]
        direction TB
        PM_OUT["platformStatusOutputWire"]
    end

    subgraph HEALTH_MONITOR["HealthMonitor\n(see Stage 6)"]
        direction TB
        HM_OUT["healthDurationOutputWire"]
    end

    %% Persisted events → Gossip (INJECT to avoid blocking)
    PCES_OUT_WRITTEN -->|"eventToGossipInputWire (INJECT)"| G_IN_EVENT

    %% Received events from peers → Event Intake (full pipeline)
    G_OUT_RECV -->|"hashEvent"| EI_IN_UNHASHED

    %% Sync progress → EventCreator
    G_OUT_SYNC -->|"reportSyncProgress"| EC_IN_SYNC

    %% Cross-cutting inputs to Gossip
    EWM_OUT -->|"eventWindowInputWire (INJECT)"| G_IN_WINDOW
    PM_OUT -->|"platformStatusInputWire (INJECT)\nupdatePlatformStatus on all protocols"| G_IN_STATUS
    HM_OUT -->|"healthStatusInputWire\nrpcProtocol::reportUnhealthyDuration"| G_IN_HEALTH

Key Design Points for Stage 3

| Arrow / Step | Component | Notes |
|---|---|---|
| writtenEventsOutputWire → eventToGossipInputWire | PCESModule → GossipModule | INJECT solder — events only gossiped after being persisted to disk (prevents accidental branching on crash) |
| synchronizer.addEvent | ShadowgraphSynchronizer | Called for every event on eventToGossipInputWire; makes the event available for sync exchanges with peers |
| rpcProtocol.addEvent | RpcProtocol | Called for every event, but only broadcasts if selfId == creatorId (direct broadcast of self-created events, skipping sync) |
| RpcPeerHandler → receivedEventHandler | RpcProtocol → SyncGossipModular | Events received from peers are passed to receivedEventHandler (bound to eventOutput::forward in SyncGossipModular.bind) |
| receivedEventOutputWire → unhashedEventsInputWire | GossipModule → EventIntakeModule | Gossip-received events enter the full intake pipeline: hash → validate → deduplicate → verify sig → orphan buffer |
| syncProgressOutputWire → syncProgressInputWire | ShadowgraphSynchronizer → EventCreatorModule | Sync lag reported per peer; EventCreationManager::reportSyncProgress uses it to gate event creation via SyncLagRule |
| eventWindowInputWire | EventWindowManager → GossipModule | INJECT; synchronizer.updateEventWindow keeps the shadowgraph aligned with the non-ancient event boundary |
| platformStatusInputWire | PlatformMonitor → GossipModule | INJECT; calls protocol.updatePlatformStatus on all protocols (HeartbeatProtocol, reconnect, RpcProtocol) |
| healthStatusInputWire | HealthMonitor → GossipModule | rpcProtocol::reportUnhealthyDuration — throttles sync permits when the system is unhealthy |
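The dual path in the second and third rows (every persisted event goes to the synchronizer, but direct broadcast happens only when selfId == creatorId) can be sketched as follows. GossipRoutingSketch and its fields are invented stand-ins for ShadowgraphSynchronizer and RpcProtocol:

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative routing of persisted events into the two gossip paths. */
public class GossipRoutingSketch {
    public record Event(long creatorId, String id) {}

    private final long selfId;
    final List<String> syncPool = new ArrayList<>();   // ShadowgraphSynchronizer stand-in
    final List<String> broadcast = new ArrayList<>();  // RpcProtocol broadcast stand-in

    public GossipRoutingSketch(long selfId) {
        this.selfId = selfId;
    }

    public void onPersistedEvent(Event e) {
        syncPool.add(e.id());              // synchronizer.addEvent — every event
        if (e.creatorId() == selfId) {
            broadcast.add(e.id());         // rpcProtocol.addEvent — self-created only
        }
    }
}
```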

Stage 4 — Pre-handle

flowchart TB
    %% ── Entry point ──
    PCES_OUT["PCESModule\n(writtenEventsOutputWire)\nsee Stage 2"]

    subgraph HASHGRAPH["HashgraphModule (ConsensusEngine)"]
        direction TB
        HG_IN_EVENT["ConsensusEngine::addEvent\n(eventInputWire)"]
        HG_OUT_PRECONS["preconsensusEventOutputWire\n(event added to DAG, not yet at consensus)"]
        HG_IN_EVENT --> HG_OUT_PRECONS
    end

    subgraph PREHANDLER["DefaultTransactionPrehandler"]
        direction TB
        PH_IN["prehandleApplicationTransactions\n(preconsensus event)"]
        PH_GETSTATE["latestStateSupplier.get()\n(latest immutable state)"]
        PH_CALL["consensusStateEventHandler.onPreHandle\n(event, state, stateSignatureTxnCallback)"]
        PH_SIGNAL["event.signalPrehandleCompletion()"]
        PH_OUT["→ Queue of ScopedSystemTransaction\n(returned to StateSignatureCollector)"]
        PH_IN --> PH_GETSTATE --> PH_CALL --> PH_SIGNAL --> PH_OUT
    end

    subgraph HEDERA["Hedera::onPreHandle"]
        direction TB
        HP_STORE["new ReadableStoreFactoryImpl(state)"]
        HP_CREATOR["networkInfo.nodeInfo(event.getCreatorId())\n(null if node left address book)"]
        HP_WORKFLOW["preHandleWorkflow.preHandle\n(storeFactory, creatorInfo, transactions, shortCircuitCallback)"]
        HP_STORE --> HP_CREATOR --> HP_WORKFLOW
    end

    subgraph PRE_HANDLE_WORKFLOW["PreHandleWorkflowImpl::preHandleTransaction\n(per transaction, parallel)"]
        direction TB
        PHW_PARSE["1. TransactionChecker::parseSignedAndCheck\n(parse protobuf, structural checks, node account ID)"]
        PHW_PARSED["2. TransactionChecker::checkParsed + checkTransactionSize\n(node due-diligence checks)"]
        PHW_DEDUP["3. DeduplicationCache::add\n(register txID as seen — not a rejection, just tracking)"]
        PHW_PAYER["4. accountStore.getAccountById(payer)\n(look up payer account)"]
        PHW_PURE["5. TransactionDispatcher::dispatchPureChecks\n(stateless semantic checks via TransactionHandler::pureChecks)"]
        PHW_PREHANDLE["6. TransactionDispatcher::dispatchPreHandle\n(route to correct handler)"]
        PHW_SIGS["7. expandAndVerifySignatures\n(SignatureExpander + SignatureVerifier)"]
        PHW_RESULT["→ PreHandleResult stored as\ntransaction metadata"]
        PHW_PARSE --> PHW_PARSED --> PHW_DEDUP --> PHW_PAYER --> PHW_PURE --> PHW_PREHANDLE --> PHW_SIGS --> PHW_RESULT
    end

    subgraph TX_DISPATCHER["TransactionDispatcher"]
        direction TB
        TD_PURE["dispatchPureChecks → getHandler(txBody)\n→ TransactionHandler::pureChecks(context)"]
        TD_PREHANDLE["dispatchPreHandle → getHandler(txBody)\n→ TransactionHandler::preHandle(context)\n(50+ service-specific handlers)"]
        TD_PURE --> TD_PREHANDLE
    end

    %% Entry: persisted events → ConsensusEngine
    PCES_OUT -->|"ConsensusEngine::addEvent"| HG_IN_EVENT

    %% Pre-consensus event → TransactionPrehandler (platform wiring)
    HG_OUT_PRECONS -->|"prehandleApplicationTransactions"| PH_IN

    %% DefaultTransactionPrehandler → Hedera::onPreHandle
    PH_CALL -->|"onPreHandle(event, state, callback)"| HP_STORE

    %% Hedera → PreHandleWorkflow
    HP_WORKFLOW -->|"preHandle(storeFactory, creatorInfo, txns, callback)"| PHW_PARSE

    %% PreHandleWorkflow → TransactionDispatcher
    PHW_PURE -->|"dispatchPureChecks(context)"| TD_PURE
    PHW_PREHANDLE -->|"dispatchPreHandle(context)"| TD_PREHANDLE

Key Design Points for Stage 4

| Step | Component | Notes |
|---|---|---|
| Entry | ConsensusEngine::addEvent | Receives events only after PCES writes them to disk (writtenEventsOutputWire) |
| Pre-consensus output | preconsensusEventOutputWire | Each event added to the DAG is emitted here; guaranteed to either reach consensus or become stale |
| Pre-handle entry | DefaultTransactionPrehandler::prehandleApplicationTransactions | Gets the latest immutable state via latestStateSupplier.get(), calls onPreHandle, then signals event.signalPrehandleCompletion() |
| App-level pre-handle | Hedera::onPreHandle | Builds ReadableStoreFactoryImpl, resolves creatorInfo (null if the node left the address book), delegates to preHandleWorkflow.preHandle(...) |
| Parse | TransactionChecker::parseSignedAndCheck | Parses protobuf, structural checks, validates node account ID matches creator |
| Due-diligence checks | TransactionChecker::checkParsed + checkTransactionSize | Node-level checks; failure charges the submitting node, not the payer |
| Dedup tracking | DeduplicationCache::add | Registers the txID as seen for receipt queries — not a rejection; deterministic dedup happens at handle time |
| Payer lookup | accountStore.getAccountById(payer) | Payer must exist and not be deleted; failure → nodeDueDiligenceFailure |
| Pure checks | TransactionDispatcher::dispatchPureChecks → TransactionHandler::pureChecks | Stateless semantic checks; result cached in PreHandleContext — not repeated at handle time |
| Service pre-handle | TransactionDispatcher::dispatchPreHandle → TransactionHandler::preHandle | Service-specific: extracts required non-payer keys, validates business rules without state mutation |
| Signature expansion & verification | SignatureExpander + SignatureVerifier | Expands prefix-based sig pairs, submits keys for async crypto verification |
| Result | PreHandleResult stored as transaction metadata | Retrieved by HandleWorkflow via getCurrentPreHandleResult(); if missing or wrong config version, pre-handle is re-run at handle time |
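The result-reuse pattern in the last row (cache the pre-handle result on the transaction, re-run only if it is missing or was computed under a different config version) can be sketched like this. PreHandleCacheSketch and its record are invented for illustration, not Hedera's actual PreHandleResult type:

```java
import java.util.Optional;

/** Illustrative sketch of pre-handle result caching keyed by config version. */
public class PreHandleCacheSketch {
    public record PreHandleResult(long configVersion, String payer) {}

    private PreHandleResult metadata;  // stored on the transaction in the real system

    /** Pre-handle (Stage 4) attaches its result as transaction metadata. */
    public void storeFromPreHandle(PreHandleResult result) {
        this.metadata = result;
    }

    /** Handle (Stage 5): reuse the cached result, or empty if pre-handle must be re-run. */
    public Optional<PreHandleResult> resultForHandle(long currentConfigVersion) {
        if (metadata == null || metadata.configVersion() != currentConfigVersion) {
            return Optional.empty();  // missing or stale config version → re-run pre-handle
        }
        return Optional.of(metadata);
    }
}
```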

Stage 5 — Consensus & Handle Workflow (ConsensusEngine → State Pipeline)

flowchart TB
    %% ── Entry point ──
    PCES_OUT["PCESModule\n(writtenEventsOutputWire)\nsee Stage 2"]

    subgraph HASHGRAPH["HashgraphModule (ConsensusEngine)"]
        direction TB
        HG_IN_EVENT["ConsensusEngine::addEvent\n(eventInputWire)"]
        HG_OUT_ROUND["consensusRoundOutputWire\n(ConsensusRound)"]
        HG_IN_EVENT -->|"Consensus algorithm"| HG_OUT_ROUND
    end

    subgraph TRANS_HANDLER["DefaultTransactionHandler"]
        direction TB
        TH_IN["handleConsensusRound\n(ConsensusRound)"]
        TH_UPDATE_STATE["updatePlatformState\n(round data, timestamps)"]
        TH_WAIT_PH["awaitPrehandleCompletion\n(ensure parallel pre-handle finished)"]
        TH_APP_CALL["ConsensusStateEventHandler.onHandleConsensusRound\n(implemented by Hedera)"]
        TH_RUN_HASH["updateRunningEventHash\n(update platform-level hashes)"]
        TH_OUT["✅ TransactionHandlerResult\n(SignedState + systemTransactions)"]
        TH_IN --> TH_UPDATE_STATE --> TH_WAIT_PH --> TH_APP_CALL --> TH_RUN_HASH --> TH_OUT
    end

    subgraph HEDERA["Hedera (App Layer Bridge)"]
        direction TB
        H_ON_ROUND["onHandleConsensusRound\n(Round, State, callback)"]
        H_WORKFLOW["handleWorkflow.handleRound\n(state, round, callback)"]
        H_ON_ROUND --> H_WORKFLOW
    end

    subgraph HANDLE_WORKFLOW["HandleWorkflow::handleRound"]
        direction TB
        HW_LOOP["Iterate ConsensusEvents"]
        HW_PLAT_TXN["handlePlatformTransaction\n(per transaction)"]
        HW_TOP_TXN["parentTxnFactory.createTopLevelTxn\n(retrieves PreHandleResult metadata)"]
        HW_DISPATCH["dispatchProcessor.processDispatch(dispatch)"]
        HW_LOOP --> HW_PLAT_TXN --> HW_TOP_TXN --> HW_DISPATCH
    end

    subgraph DISPATCH_PROCESSOR["DispatchProcessor"]
        direction TB
        DP_PROC["processDispatch(dispatch)\n(validation, fee charging)"]
        DP_HANDLE["dispatcher.dispatchHandle(handleContext)"]
        DP_PROC --> DP_HANDLE
    end

    subgraph TX_DISPATCHER["TransactionDispatcher"]
        direction TB
        TD_HANDLE["dispatchHandle → getHandler(txBody)\n→ TransactionHandler::handle(context)\n(Service-specific logic)"]
    end

    subgraph POST_HANDLE_STATE["Post-handle State Pipeline (PlatformWiring)"]
        direction TB
        GC["StateGarbageCollector::registerState"]
        SSC["StateSignatureCollector::handlePostconsensusSignatures"]
        SH["StateHasher::hashState"]
        SSIGN["StateSigner::signState"]
        ISS["IssDetector::handleState"]
        NOTIF["AppNotifier::sendStateHashedNotification"]
        
        SSC_ADD["StateSignatureCollector::addReservedState\n(adds hashed state to collector)"]

        SH --> SSIGN
        SH --> ISS
        SH --> NOTIF
        SH --> SSC_ADD
    end

    %% Main flow
    PCES_OUT --> HG_IN_EVENT
    HG_OUT_ROUND --> TH_IN
    TH_APP_CALL --> H_ON_ROUND
    H_WORKFLOW --> HW_LOOP
    HW_DISPATCH --> DP_PROC
    DP_HANDLE --> TD_HANDLE

    %% Post-handle wiring (TransactionHandler output)
    TH_OUT --> GC
    TH_OUT --> SSC
    TH_OUT -->|"via SavedStateController"| SH

    %% Cross-cutting connections
    HG_OUT_ROUND -->|"extractEventWindow\n(Stage 6 — EventWindowManager)"| _EWM["EventWindowManager"]
    SSIGN -->|"execution::submitStateSignature"| _EXEC["ExecutionLayer"]

Key Design Points for Stage 5

| Step | Component | Notes |
|---|---|---|
| Round generation | ConsensusEngine::addEvent | When a round is completed, a ConsensusRound is emitted via consensusRoundOutputWire |
| Round entry | DefaultTransactionHandler::handleConsensusRound | Entry point for round application at the platform level; updates platform state and hashes |
| Sync gate | awaitPrehandleCompletion | Ensures the parallel PreHandleWorkflow from Stage 4 has finished before handle logic starts |
| App entry | Hedera::onHandleConsensusRound | Hedera implements ConsensusStateEventHandler, bridging platform events to the app workflow |
| Workflow loop | HandleWorkflow::handleRound | Iterates through events and transactions, ensuring correct handle order |
| Result reuse | createTopLevelTxn | Retrieves the PreHandleResult from transaction metadata, reusing work from Stage 4 |
| Dispatch | DispatchProcessor::processDispatch | Orchestrates fee charging, validation, and final handle dispatch |
| State hashing | StateHasher::hashState | Triggered after TransactionHandler finishes; computes the consensus state hash |
| State signing | StateSigner::signState | Once hashed, the state is signed and signatures are collected by StateSignatureCollector |
| GC | StateGarbageCollector::registerState | Manages the deletion of old states that are no longer needed |
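The sync gate row is the classic signal/await handshake: pre-handle (running in parallel since Stage 4) signals completion, and the handle path blocks until that signal arrives. A minimal sketch using a CountDownLatch; PrehandleGateSketch is an invented name mirroring signalPrehandleCompletion / awaitPrehandleCompletion, not the platform's actual implementation:

```java
import java.util.concurrent.CountDownLatch;

/** Illustrative one-shot gate between parallel pre-handle and ordered handle. */
public class PrehandleGateSketch {
    private final CountDownLatch done = new CountDownLatch(1);

    /** Called by the pre-handle thread when its work on this event is finished. */
    public void signalPrehandleCompletion() {
        done.countDown();
    }

    /** Called on the handle path; blocks until pre-handle has signalled. */
    public void awaitPrehandleCompletion() throws InterruptedException {
        done.await();
    }
}
```

This ordering guarantee is what lets Stage 5 safely read the PreHandleResult metadata produced in Stage 4.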

Stage 6 — Cross-cutting: EventWindow, PlatformStatus & Health

flowchart TB
    HB_PM["⏱ Heartbeat (status machine period)"]
    HB_GC["⏱ Heartbeat (GC period)"]
    HB_SS["⏱ Heartbeat (sentinel period)"]
    HG_ROUND["HashgraphModule\n(consensusRoundOutputWire)\nsee Stage 5"]
    SNP_DISK["StateSnapshotManager\n(output)"]
    ISS_NOTIF["IssDetector\n(getOutputWire)"]
    RH["RunningEventHashOverrideWiring\n(runningHashUpdateOutput)"]

    subgraph EVENT_WINDOW_MGR["EventWindowManager"]
        direction TB
        EWM_IN["extractEventWindow\n(from ConsensusRound)"]
        EWM_OUT["eventWindowOutputWire"]
    end

    subgraph PLATFORM_MONITOR["PlatformMonitor"]
        direction TB
        PM_IN_HB["heartbeat"]
        PM_IN_ROUND["consensusRound"]
        PM_IN_DISK["stateWrittenToDisk"]
        PM_IN_ISS["issNotification"]
        PM_OUT["platformStatusOutputWire (PlatformStatus)"]
    end

    subgraph HEALTH_MONITOR["HealthMonitor"]
        direction TB
        HM_OUT["healthDurationOutputWire (Duration)"]
    end

    %% Heartbeats
    HB_PM -->|OFFER| PM_IN_HB
    HB_GC -->|OFFER| GC_HB["StateGarbageCollector::heartbeat"]
    HB_SS -->|OFFER| SS_HB["SignedStateSentinel::checkSignedStates"]

    %% PlatformMonitor inputs
    HG_ROUND --> PM_IN_ROUND
    SNP_DISK --> PM_IN_DISK
    ISS_NOTIF --> PM_IN_ISS

    %% EventWindow fan-out (INJECT to all consumers)
    EWM_OUT -->|INJECT| EI_WIN["EventIntakeModule\n(eventWindowInputWire)"]
    EWM_OUT -->|INJECT| G_WIN["GossipModule\n(eventWindowInputWire)"]
    EWM_OUT -->|INJECT| PCES_WIN["PCESModule\n(eventWindowInputWire)"]
    EWM_OUT -->|INJECT| EC_WIN["EventCreatorModule\n(eventWindowInputWire)"]
    EWM_OUT --> LCN_WIN["LatestCompleteStateNexus\n(updateEventWindow)"]
    EWM_OUT -->|INJECT| BD_WIN["BranchDetector\n(updateEventWindow)"]
    EWM_OUT -->|INJECT| BR_WIN["BranchReporter\n(updateEventWindow)"]

    %% PlatformStatus fan-out
    PM_OUT --> EC_STATUS["EventCreatorModule\n(platformStatusInputWire)"]
    PM_OUT -->|INJECT| HG_STATUS["HashgraphModule\n(platformStatusInputWire)"]
    PM_OUT -->|"ExecutionStatusHandler"| EXEC_STATUS["ExecutionLayer\n(newPlatformStatus)"]
    PM_OUT -->|INJECT| G_STATUS["GossipModule\n(platformStatusInputWire)"]
    PM_OUT -->|INJECT| LCN_STATUS["LatestCompleteStateNexus\n(updatePlatformStatus)"]
    PM_OUT --> NOT_STATUS["AppNotifier\n(sendPlatformStatusChangeNotification)"]

    %% Health fan-out
    HM_OUT --> EC_HEALTH["EventCreatorModule\n(healthStatusInputWire)"]
    HM_OUT --> G_HEALTH["GossipModule\n(healthStatusInputWire)"]
    HM_OUT -->|"executionHealthInput"| EXEC_HEALTH["ExecutionLayer\n(reportUnhealthyDuration)"]

    %% RunningEventHashOverride (legacy)
    RH --> TH_HASH["TransactionHandler\n(updateLegacyRunningEventHash)"]
    RH --> CES_HASH["ConsensusEventStream\n(legacyHashOverride)"]
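The EventWindow fan-out above exists so that every consumer agrees on the same non-ancient boundary and can discard events that fall below it. A simplified sketch of what a consumer does with a setEventWindow update; EventWindowSketch and the single-number boundary are deliberate simplifications of the real EventWindow type:

```java
/** Illustrative consumer-side view of the event window's non-ancient boundary. */
public class EventWindowSketch {
    private long nonAncientBoundary = 0;  // updated via setEventWindow on each consumer

    /** Fan-out target: EventWindowManager pushes the new boundary to this consumer. */
    public void setEventWindow(long boundary) {
        this.nonAncientBoundary = boundary;
    }

    /** An event below the boundary is ancient and can be ignored or discarded. */
    public boolean isAncient(long eventBirthRound) {
        return eventBirthRound < nonAncientBoundary;
    }
}
```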

Summary: Stage Relationships

flowchart LR
S0["Stage 1\ngRPC Ingestion\n(Client → TransactionPool)"]
S1["Stage 2\nEvent Ingestion\n(Creation + Intake + PCES)"]
S2["Stage 3\nGossip\n(Send & Receive)"]
S4["Stage 4\nPre-handle\n(ConsensusEngine → TransactionHandler::preHandle)"]
S5["Stage 5\nConsensus & Handle\n(ConsensusRound → TransactionHandler::handle)"]
S6["Stage 6\nCross-cutting\n(EventWindow, Status, Health)"]

S0 -->|"transactions → TransactionPoolNexus\n→ EventCreationManager"| S1
S1 -->|"writtenEvents → Gossip"| S2
S2 -->|"receivedEvents → Intake"| S1
S1 -->|"writtenEvents → ConsensusEngine::addEvent"| S4
S1 -->|"writtenEvents → ConsensusEngine::addEvent"| S5
S5 -->|"consensusRound → EventWindowMgr\nstateWrittenToDisk → PlatformMonitor\nissNotification → PlatformMonitor"| S6
S6 -->|"EventWindow, PlatformStatus,\nHealth → all modules"| S1
S6 -->|"EventWindow, PlatformStatus,\nHealth → all modules"| S2
S6 -->|"EventWindow, PlatformStatus,\nHealth → all modules"| S4
S6 -->|"EventWindow, PlatformStatus,\nHealth → all modules"| S5

Key Design Principles Visible Across Stages

  • Persistence gate (PCES): Events only reach ConsensusEngine, Gossip, and EventCreator after being written to disk (writtenEventsOutputWire). This prevents branching on crash.
  • INJECT solder type: Used for high-priority, non-blocking delivery (EventWindow, PlatformStatus, Gossip events). Regular solder blocks if the downstream queue is full.
  • EventWindow as cross-cutting concern: Fans out to 7 consumers (all INJECT), keeping all modules aligned on the current non-ancient event boundary.
  • PlatformStatus as cross-cutting concern: Fans out to 6 consumers, gating event creation, gossip, consensus, and state management.
  • Pre-handle receives from ConsensusEngine, not Gossip: preconsensusEventOutputWire guarantees every pre-handled event either reaches consensus or becomes stale — no wasted work.
  • Result reuse via metadata: Stage 4 (Pre-handle) attaches PreHandleResult to the transaction, which Stage 5 (Handle) retrieves to avoid repeating parsing and signature verification.
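The INJECT-vs-regular solder distinction in the second bullet can be sketched with a bounded queue: a regular solder respects the downstream capacity, while INJECT enqueues regardless so that high-priority signals are never stalled. SolderSketch is an invented illustration (the real system blocks rather than refusing, and uses the platform wiring framework):

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Illustrative contrast between capacity-respecting and INJECT-style delivery. */
public class SolderSketch {
    private final Deque<String> queue = new ArrayDeque<>();
    private final int capacity;

    public SolderSketch(int capacity) {
        this.capacity = capacity;
    }

    /** Regular solder: refuses (in the real system, blocks) when at capacity. */
    public boolean offer(String task) {
        if (queue.size() >= capacity) return false;
        queue.add(task);
        return true;
    }

    /** INJECT solder: enqueues regardless of the configured capacity. */
    public void inject(String task) {
        queue.add(task);
    }

    public int size() {
        return queue.size();
    }
}
```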