Analysis: Database Schema Changes Required for Multichain Gateway
---
What the PRs Changed (Summary)
┌──────┬─────────────────────────────────────────────┬───────────────────────────────────────┐
│ PR   │ Title                                       │ DB Changes                            │
├──────┼─────────────────────────────────────────────┼───────────────────────────────────────┤
│ #411 │ Cross-chain gateway Phase 1 │ None — pure service-layer refactoring │
├──────┼─────────────────────────────────────────────┼───────────────────────────────────────┤
│ #415 │ Migrate domain newt.foundation → newton.xyz │ None — URL strings only │
├──────┼─────────────────────────────────────────────┼───────────────────────────────────────┤
│ #416 │ Fix stale domains + timeout configs │ None — config + docs only │
└──────┴─────────────────────────────────────────────┴───────────────────────────────────────┘
None of the three PRs introduced database migrations. The multichain gateway was shipped as a service-layer change (per-chain
ChainService, MultiChainConfig, chain watcher Redis), but the database schema still assumes a single-chain world.
---
Current Schema (3 Gateway Tables)
┌──────────────────────┬─────────────────┬───────────────────────────────────────────────────────────────────────────────────────────┐
│ Table                │ chain_id?       │ Multichain Issue                                                                          │
├──────────────────────┼─────────────────┼───────────────────────────────────────────────────────────────────────────────────────────┤
│ api_keys │ No │ Likely OK — API keys are gateway-level, not per-chain │
├──────────────────────┼─────────────────┼───────────────────────────────────────────────────────────────────────────────────────────┤
│ policy_client_secret │ No │ Keyed by (policy_client_address, policy_data_address) — same address can exist on │
│ │ │ different chains │
├──────────────────────┼─────────────────┼───────────────────────────────────────────────────────────────────────────────────────────┤
│ encrypted_data_refs │ No │ Keyed by (sender_address, policy_client_address) — same address pair can exist on │
│ │ │ different chains │
└──────────────────────┴─────────────────┴───────────────────────────────────────────────────────────────────────────────────────────┘
Plus the indexer DB (the task table, in an external database), which already receives chain_id as a parameter but collapses it into a binary network enum (mainnet/testnet).
---
Required Changes
★ Insight ─────────────────────────────────────
The core issue is address collision across chains. In a single-chain world, a policy_client_address like 0xABC... uniquely identifies one
contract. In multichain, 0xABC... on Ethereum vs 0xABC... on Base are different contracts with different policies, different owners, and
different secrets. Without chain_id scoping, the gateway silently serves Base secrets for an Ethereum request (or vice versa) if the
addresses happen to match — particularly likely with CREATE2 deterministic deployments.
─────────────────────────────────────────────────
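To make the collision concrete, here is a toy Rust sketch (in-memory maps, not gateway code; the addresses and secrets are made up): keying by address alone lets the second chain's row silently clobber the first, while a (chain_id, address) key keeps both.

```rust
use std::collections::HashMap;

// Toy demonstration: the same CREATE2-style address is "deployed" on
// Ethereum (chain 1) and Base (chain 8453) with different secrets.
pub fn store_by_address(entries: &[(u64, &str, &str)]) -> HashMap<String, String> {
    let mut map = HashMap::new();
    for (_chain_id, addr, secret) in entries {
        // Address-only key: the second chain's entry overwrites the first.
        map.insert(addr.to_string(), secret.to_string());
    }
    map
}

pub fn store_by_chain_and_address(
    entries: &[(u64, &str, &str)],
) -> HashMap<(u64, String), String> {
    let mut map = HashMap::new();
    for (chain_id, addr, secret) in entries {
        // (chain_id, address) key: both entries survive.
        map.insert((*chain_id, addr.to_string()), secret.to_string());
    }
    map
}

fn main() {
    let entries = [(1u64, "0xABC", "eth-secret"), (8453u64, "0xABC", "base-secret")];
    assert_eq!(store_by_address(&entries).len(), 1); // silent collision
    assert_eq!(store_by_chain_and_address(&entries).len(), 2); // chain-scoped
}
```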
1. policy_client_secret — Needs chain_id (High Priority)
Current unique key: (policy_client_address, policy_data_address)
Problem: A policy client at 0xABC on Sepolia (chain 11155111) and 0xABC on Base Sepolia (chain 84532) are different contracts with
different KMS-encrypted secrets. The current schema can only store one secrets blob per address pair.
Fix: Add chain_id BIGINT NOT NULL and change the unique constraint:
ALTER TABLE policy_client_secret
  ADD COLUMN chain_id BIGINT NOT NULL DEFAULT 0;
-- Backfill: set existing rows to the source chain_id from config
UPDATE policy_client_secret SET chain_id = <SOURCE_CHAIN_ID> WHERE chain_id = 0;
-- Replace the unique constraint
ALTER TABLE policy_client_secret
  DROP CONSTRAINT policy_client_secret_unique;
ALTER TABLE policy_client_secret
  ADD CONSTRAINT policy_client_secret_unique
  UNIQUE (chain_id, policy_client_address, policy_data_address);
-- Update the index
DROP INDEX IF EXISTS idx_policy_client_secret_policy_client_policy_data;
CREATE INDEX idx_policy_client_secret_chain_policy_client_policy_data
  ON policy_client_secret (chain_id, policy_client_address, policy_data_address);
Rust changes needed:
- WasmSecretRepository::get_by_policy_data_and_policy_client() → add chain_id param
- WasmSecretRepository::upsert() → add chain_id param
- All callers in secrets.rs, simulate_policy_data.rs, simulate_policy.rs
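A hypothetical sketch of what the repository surface could look like after the change, using a stdlib in-memory map in place of the real database layer (the actual WasmSecretRepository signatures and types in the codebase may differ):

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
pub struct WasmSecret {
    pub encrypted_blob: Vec<u8>,
}

#[derive(Default)]
pub struct InMemoryWasmSecretRepository {
    // Key mirrors the new unique constraint:
    // (chain_id, policy_client_address, policy_data_address).
    rows: HashMap<(u64, String, String), WasmSecret>,
}

impl InMemoryWasmSecretRepository {
    // Both lookup and upsert take chain_id, so callers cannot forget the scope.
    pub fn get_by_policy_data_and_policy_client(
        &self,
        chain_id: u64,
        policy_client: &str,
        policy_data: &str,
    ) -> Option<&WasmSecret> {
        self.rows
            .get(&(chain_id, policy_client.to_string(), policy_data.to_string()))
    }

    pub fn upsert(&mut self, chain_id: u64, policy_client: &str, policy_data: &str, secret: WasmSecret) {
        self.rows
            .insert((chain_id, policy_client.to_string(), policy_data.to_string()), secret);
    }
}

fn main() {
    let mut repo = InMemoryWasmSecretRepository::default();
    // Same address pair on Sepolia and Base Sepolia: two distinct secrets.
    repo.upsert(11155111, "0xABC", "0xDEF", WasmSecret { encrypted_blob: vec![1] });
    repo.upsert(84532, "0xABC", "0xDEF", WasmSecret { encrypted_blob: vec![2] });
    assert_ne!(
        repo.get_by_policy_data_and_policy_client(11155111, "0xABC", "0xDEF"),
        repo.get_by_policy_data_and_policy_client(84532, "0xABC", "0xDEF"),
    );
}
```

Making chain_id a positional parameter (rather than an optional filter) turns every forgotten call site into a compile error, which is the point of doing this before more chains ship.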
2. encrypted_data_refs — Needs chain_id (High Priority)
Current key: UUID primary key, indexed by (sender_address, policy_client_address)
Problem: Encrypted data is uploaded scoped to a policy client. If the same policy client address exists on multiple chains, the
get_by_sender_and_policy_client query returns refs from all chains. The HPKE AAD already includes chain_id
(keccak256(abi.encodePacked(policy_client, chain_id))), so a cross-chain mix would also fail decryption — but we shouldn't rely on
decryption failure as the filter mechanism.
Fix: Add chain_id BIGINT NOT NULL:
ALTER TABLE encrypted_data_refs
  ADD COLUMN chain_id BIGINT NOT NULL DEFAULT 0;
-- Backfill
UPDATE encrypted_data_refs SET chain_id = <SOURCE_CHAIN_ID> WHERE chain_id = 0;
-- Add indexes for chain-scoped lookups
DROP INDEX IF EXISTS idx_encrypted_data_refs_sender_policy;
CREATE INDEX idx_encrypted_data_refs_chain_sender_policy
  ON encrypted_data_refs (chain_id, sender_address, policy_client_address);
DROP INDEX IF EXISTS idx_encrypted_data_refs_policy_client;
CREATE INDEX idx_encrypted_data_refs_chain_policy_client
  ON encrypted_data_refs (chain_id, policy_client_address);
Rust changes needed:
- EncryptedDataRefRepository::insert() → add chain_id param
- EncryptedDataRefRepository::get_by_sender_and_policy_client() → add chain_id param
- EncryptedDataRefRecord struct → add chain_id: u64 field
- Upload handler (privacy.rs) and fetch path (sync.rs) need to pass chain_id
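A minimal sketch of the chain-scoped lookup, with a hypothetical record shape (only chain_id is the point; the other fields follow the analysis text, not the real struct):

```rust
#[derive(Clone, Debug, PartialEq)]
pub struct EncryptedDataRefRecord {
    pub chain_id: u64, // new column
    pub sender_address: String,
    pub policy_client_address: String,
}

// Mirrors the new (chain_id, sender_address, policy_client_address) index:
// filtering on chain first means refs from other chains never reach the
// HPKE decryption path at all.
pub fn get_by_sender_and_policy_client<'a>(
    rows: &'a [EncryptedDataRefRecord],
    chain_id: u64,
    sender: &str,
    policy_client: &str,
) -> Vec<&'a EncryptedDataRefRecord> {
    rows.iter()
        .filter(|r| {
            r.chain_id == chain_id
                && r.sender_address == sender
                && r.policy_client_address == policy_client
        })
        .collect()
}

fn main() {
    let rows = vec![
        EncryptedDataRefRecord {
            chain_id: 11155111,
            sender_address: "0xSENDER".into(),
            policy_client_address: "0xABC".into(),
        },
        EncryptedDataRefRecord {
            chain_id: 84532,
            sender_address: "0xSENDER".into(),
            policy_client_address: "0xABC".into(),
        },
    ];
    // Without the chain_id filter, this query would return both refs.
    let hits = get_by_sender_and_policy_client(&rows, 84532, "0xSENDER", "0xABC");
    assert_eq!(hits.len(), 1);
    assert_eq!(hits[0].chain_id, 84532);
}
```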
3. api_keys — No change needed
API keys are gateway-level authentication tokens. They authorize access to the gateway itself, not to a specific chain. The permissions
JSONB could eventually include per-chain scoping (e.g., {"rpc_write": [11155111, 84532]}), but that's a future enhancement, not a
blocking requirement.
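If per-chain scoping is ever added, the check could be as simple as the following hypothetical sketch (nothing like this exists in the current permissions JSONB; the field name is invented):

```rust
// Hypothetical future shape of per-chain API-key scoping.
pub struct ApiKeyPermissions {
    // Parsed from e.g. {"rpc_write": [11155111, 84532]}.
    pub rpc_write_chains: Vec<u64>,
}

pub fn can_rpc_write(perms: &ApiKeyPermissions, chain_id: u64) -> bool {
    perms.rpc_write_chains.contains(&chain_id)
}

fn main() {
    let perms = ApiKeyPermissions { rpc_write_chains: vec![11155111, 84532] };
    assert!(can_rpc_write(&perms, 84532)); // granted chain
    assert!(!can_rpc_write(&perms, 1)); // mainnet not granted
}
```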
4. Indexer DB (task table) — Review network mapping (Medium Priority)
Current logic (line 51 of indexer_db.rs):
let network = if chain_id == 1 { "mainnet" } else { "testnet" };
Problem: This binary mapping is wrong for multichain. Chain IDs like 8453 (Base mainnet), 42161 (Arbitrum mainnet), and 137 (Polygon
mainnet) would all be classified as "testnet". The indexer DB's network enum needs to handle all supported chains.
Fix options:
- Use is_mainnet(chain_id) from crates/core/src/common/chain.rs instead of the hardcoded == 1 check
- Or store chain_id as a separate column alongside network for precise filtering
// Replace line 51:
let network = if newton_prover_core::common::chain::is_mainnet(chain_id) {
    "mainnet"
} else {
    "testnet"
};
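For reference, a sketch of what such an is_mainnet helper could look like; the actual function in crates/core/src/common/chain.rs is the source of truth for which chains are supported:

```rust
// Sketch only: classify well-known EIP-155 chain IDs as mainnet.
// Ethereum (1), Polygon (137), Base (8453), Arbitrum One (42161).
pub fn is_mainnet(chain_id: u64) -> bool {
    matches!(chain_id, 1 | 137 | 8453 | 42161)
}

fn main() {
    // The hardcoded `chain_id == 1` check misclassifies every L2 mainnet.
    for id in [8453u64, 42161, 137] {
        assert!(is_mainnet(id) && id != 1);
    }
    assert!(!is_mainnet(11155111)); // Sepolia stays testnet
}
```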
5. generate-init-sql.sh — Update after migrations
The GATEWAY_TABLES list and generated SQL need to be regenerated after adding the chain_id columns. Run make generate-init-sql after the
migration is applied.
---
Migration Plan
Migration 1: 20260306100000_add_chain_id_to_policy_client_secret.sql
Migration 2: 20260306100500_add_chain_id_to_encrypted_data_refs.sql
Deployment order:
1. Run migrations (adds columns with DEFAULT 0)
2. Backfill existing rows with the appropriate chain_id
3. Deploy new gateway code that passes chain_id to all queries
4. Regenerate init SQL (make generate-init-sql)
★ Insight ─────────────────────────────────────
- The DEFAULT 0 strategy is safe for zero-downtime deployment: old gateway code (without chain_id) continues to write rows with
chain_id=0, new code writes the real chain_id. A one-time backfill job converts 0 → actual chain_id for historical rows.
- The HPKE AAD already binds encrypted data to (policy_client, chain_id), so even without the DB fix, cross-chain data won't decrypt
successfully. But relying on decryption failure instead of proper DB scoping is a correctness smell — it wastes DB reads and makes
debugging harder.
- policy_client_secret is the highest priority because WASM secrets are fetched on the hot path during policy evaluation. A wrong-chain
secret causes silent policy failures (wrong data decrypted, different policy behavior).
─────────────────────────────────────────────────
---
Summary
┌──────────────────────────────────────┬──────────┬────────────────────────────────────────────────────────┐
│ Change                               │ Priority │ Blocking?                                              │
├──────────────────────────────────────┼──────────┼────────────────────────────────────────────────────────┤
│ Add chain_id to policy_client_secret │ High     │ Yes — wrong secrets silently served                    │
├──────────────────────────────────────┼──────────┼────────────────────────────────────────────────────────┤
│ Add chain_id to encrypted_data_refs  │ High     │ Partially mitigated by HPKE AAD, but still needed      │
├──────────────────────────────────────┼──────────┼────────────────────────────────────────────────────────┤
│ Fix indexer DB network mapping       │ Medium   │ Cosmetic for explorer UI but incorrect for L2 mainnets │
├──────────────────────────────────────┼──────────┼────────────────────────────────────────────────────────┤
│ Update generate-init-sql.sh          │ Low      │ Follow-up after migrations                             │
├──────────────────────────────────────┼──────────┼────────────────────────────────────────────────────────┤
│ api_keys per-chain scoping           │ Future   │ Not needed now                                         │
└──────────────────────────────────────┴──────────┴────────────────────────────────────────────────────────┘
Want me to proceed with writing the actual migration files and Rust code changes?