I'm putting together a spec of a blueprint that I want built. You should use the spec as a reference but not copy from it, as its code examples might not be accurate. We want to ensure that this blueprint can support an arbitrary blockchain RPC: users should be able to request the service from this blueprint for different chains by passing in different public RPC docker images.
The firewall should be configurable to support different public domains and be usable as a public good. It should also be usable as a monetizable service where users pay for access to query the RPC, with rate-limiting, etc.
- ✅ The Blueprint itself runs as a persistent RPC service (launched in main.rs).
- ✅ Jobs mutate state (e.g., access control, firewall updates, API key issuance).
- ✅ It supports public read-only access (e.g. from polkadot.js.org) via admin-configured allowlists.
- ✅ Compatible with Tangle’s runtime and service model — not just spinning up containers, but mounting a live service behind a smart, enforceable job-logic system.
secure_rpc_service/
├── secure_rpc_service-bin/
│ └── src/main.rs # Launches the actual RPC proxy service
├── secure_rpc_service-lib/
│ ├── src/
│ │ ├── lib.rs # Job declarations and Blueprint entrypoint
│ │ ├── context.rs # Holds config, state (allowed IPs/accounts), webhook list
│ │ ├── firewall.rs # In-process firewall evaluator + update logic
│ │ ├── rpc.rs # HTTP server proxy that enforces logic before forwarding
│ │ ├── config.rs # Operator-defined whitelist, rate limit, token prices
│ │ └── jobs/
│ │ ├── pay_for_access.rs # Pay-and-allow via EVM or native method
│ │ ├── allow_account.rs # Admin job to permanently allow access
│ │ └── register_webhook.rs # Optional: user adds webhook for access/error events
This launches the service:
use secure_rpc_service_lib::{start_rpc_gateway, SecureRpcBlueprint};

#[tokio::main]
async fn main() -> eyre::Result<()> {
    let blueprint = SecureRpcBlueprint::new();
    blueprint.register().await?;
    start_rpc_gateway(blueprint.context()).await
}

This is a persistent async service that:
- Listens on a public port (e.g. 8545)
- Applies local firewall logic
- Forwards JSON-RPC calls to underlying node if allowed
use std::collections::HashSet;
use std::net::IpAddr;
use std::sync::Arc;
use tokio::net::TcpListener;
use tracing::warn;

pub async fn start_rpc_gateway(ctx: Arc<Context>) -> eyre::Result<()> {
    let listener = TcpListener::bind(ctx.config.listen_addr).await?;
    loop {
        let (socket, addr) = listener.accept().await?;
        let ctx = ctx.clone();
        tokio::spawn(async move {
            if ctx.firewall.is_allowed(&addr.ip(), &ctx).await {
                // Errors on an individual proxied connection are logged, not fatal
                if let Err(e) = proxy_rpc(socket, &ctx).await {
                    warn!("Proxy error for {}: {e}", addr.ip());
                }
            } else {
                warn!("Blocked: {}", addr.ip());
            }
        });
    }
}

pub struct Firewall {
    pub allow_accounts: HashSet<AccountId>,
    pub allow_ips: HashSet<IpAddr>,
}
impl Firewall {
    pub async fn is_allowed(&self, ip: &IpAddr, ctx: &Context) -> bool {
        if self.allow_ips.contains(ip) {
            return true;
        }
        // Optionally: look up IPs/accounts from an in-RPC auth token
        false
    }

    pub fn add_account(&mut self, account: AccountId) {
        self.allow_accounts.insert(account);
    }
}

- Allow public accounts used by PolkadotJS
- Allow static IPs or CIDRs
- Optional: Token-based or NFT-based gating
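The `allow_temp` call used by the pay_for_access job later in this spec is never defined here. Below is a std-only sketch of how the firewall could track temporary, expiring grants alongside the permanent allowlist; the `AccountId` alias, method names, and lazy-expiry policy are illustrative stand-ins, not the real implementation.

```rust
use std::collections::{HashMap, HashSet};
use std::net::IpAddr;
use std::time::{Duration, Instant};

// Hypothetical stand-in; the real AccountId type comes from the runtime.
type AccountId = String;

pub struct Firewall {
    pub allow_accounts: HashSet<AccountId>,
    pub allow_ips: HashSet<IpAddr>,
    // Temporary grants mapped to their expiry instant.
    temp_accounts: HashMap<AccountId, Instant>,
}

impl Firewall {
    pub fn new() -> Self {
        Self {
            allow_accounts: HashSet::new(),
            allow_ips: HashSet::new(),
            temp_accounts: HashMap::new(),
        }
    }

    // Grant access for `duration_secs`, as the pay_for_access job would.
    pub fn allow_temp(&mut self, account: AccountId, duration_secs: u64) {
        let expiry = Instant::now() + Duration::from_secs(duration_secs);
        self.temp_accounts.insert(account, expiry);
    }

    // An account passes if permanently allowed, or if its grant has not expired.
    pub fn account_allowed(&mut self, account: &AccountId) -> bool {
        if self.allow_accounts.contains(account) {
            return true;
        }
        match self.temp_accounts.get(account) {
            Some(expiry) if Instant::now() < *expiry => true,
            Some(_) => {
                // Expired grants are pruned lazily on lookup.
                self.temp_accounts.remove(account);
                false
            }
            None => false,
        }
    }
}

fn main() {
    let mut fw = Firewall::new();
    fw.allow_temp("alice".to_string(), 60);
    println!("{}", fw.account_allowed(&"alice".to_string())); // true
    println!("{}", fw.account_allowed(&"bob".to_string())); // false
}
```

Lazy pruning on lookup keeps the sketch simple; a production service might instead sweep expired grants on a timer.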
#[derive(Blueprint)]
pub struct SecureRpcBlueprint;

impl SecureRpcBlueprint {
    pub fn register() -> BlueprintRegistration {
        BlueprintRegistration::new()
            .job("pay_for_access", jobs::pay_for_access::handler)
            .job("allow_account", jobs::allow_account::handler)
            .job("register_webhook", jobs::register_webhook::handler)
    }
}

Admin-controlled job to add a public account:
#[derive(Deserialize, Serialize)]
pub struct AllowAccountJob {
    pub account: AccountId,
}

pub async fn handler(ctx: &mut Context, job: AllowAccountJob) -> Result<(), JobError> {
    ctx.firewall.add_account(job.account);
    Ok(())
}

User pays a token to get temporary access:
#[derive(Deserialize, Serialize)]
pub struct PayForAccessJob {
    pub account: AccountId,
    pub duration_secs: u64,
}

pub async fn handler(ctx: &mut Context, job: PayForAccessJob) -> Result<(), JobError> {
    // Validate the token payment using ctx.chain_api or EVMConsumer
    ctx.firewall.allow_temp(job.account, job.duration_secs);
    Ok(())
}

To work seamlessly:
- Open RPC port 8545 (or Substrate-compatible WS port)
- Expose JSON-RPC methods like `state_getStorage`, `chain_getBlock`, etc.
- Whitelist known Polkadot.js IPs or accounts (use the `allow_account` job)
- Add a landing page for metadata injection (if needed)
[rpc]
listen_addr = "0.0.0.0:8545"
proxy_to = "http://localhost:9933" # Local node
[firewall]
allow_ips = ["127.0.0.1", "1.2.3.4"]
allow_accounts = ["0x1234..."] # PolkadotJS public account(s)

| Feature | Job | Notes |
|---|---|---|
| Webhook Notify | register_webhook, emit_webhook | Trigger on access granted, errors, payments |
| EVM Metering | blueprint_sdk::evm::EvmConsumer | Charge for job access or usage |
| Metrics | HTTP /metrics endpoint | Expose Prometheus stats via native producer |
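The rate-limiting mentioned for the monetizable tier could be a simple per-key token bucket. Here is a std-only sketch; the struct name, field names, and refill policy are illustrative, not taken from the SDK.

```rust
use std::time::Instant;

// Token bucket: bursts up to `capacity` requests, refilled at `refill_per_sec`.
pub struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last_refill: Instant,
}

impl TokenBucket {
    pub fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last_refill: Instant::now() }
    }

    // Returns true if the request is admitted, consuming one token.
    pub fn try_acquire(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        // Refill proportionally to elapsed time, capped at capacity.
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last_refill = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Allow bursts of 2, refilling one token per second.
    let mut bucket = TokenBucket::new(2.0, 1.0);
    println!("{}", bucket.try_acquire()); // true
    println!("{}", bucket.try_acquire()); // true
    println!("{}", bucket.try_acquire()); // false: bucket drained
}
```

In the gateway, one bucket per API key or IP would be stored in the context and consulted before `proxy_rpc` forwards a call.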
Please build everything start to finish, do not stop, implement it in one shot. Do not add any TODOs or PLACEHOLDERS or any comments indicating future work. GET IT ALL DONE — NO TODOS. NONE AT ALL. DO EVERYTHING. Make it production ready; no testing/simulation code allowed, only production code. Efficient, concise, production-ready service.
The docker images for RPCs to test with can be, for example:
docker pull ghcr.io/tangle-network/tangle/tangle:main
as well as Ethereum, Arbitrum, Optimism, etc.
OK, I think the right way is that our repo will ship working docker images / compose files, so the user just selects which chain they want an RPC for and doesn't need to supply their own image. This way it is secure, and the operator doesn't need to worry about running a malicious image. PLEASE ACK ON THIS AND GET STARTED.
blueprint.mdc cursor rules - Tangle Blueprint Guide
1. What is a Tangle Blueprint?
A Tangle Blueprint is a modular, job-executing service built on top of Substrate (Tangle) using the Blueprint SDK. It is structured similarly to a microservice with:
These services are composable and deterministic, often containerized (e.g. Docker), and can be tested using the built-in `TangleTestHarness`.

2. Project Skeleton
The canonical `main.rs` structure looks like:

3. Job Composition
Handler Signature
Handlers take a context and deserialized args:
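To illustrate the handler shape without depending on the SDK, here is a compilable sketch with locally defined stand-ins for `TangleArg` and `TangleResult`; the real SDK types differ, and real handlers are typically async (this one is sync for brevity).

```rust
// Local stand-ins for the SDK's extractor and result types — illustrative only.
pub struct TangleArg<T>(pub T);
pub struct TangleResult<T>(pub T);

pub struct MyContext {
    pub multiplier: u64,
}

// A handler receives the context plus deserialized args and returns TangleResult<T>.
pub fn square_and_scale(ctx: &MyContext, TangleArg(x): TangleArg<u64>) -> TangleResult<u64> {
    TangleResult(ctx.multiplier * x * x)
}

fn main() {
    let ctx = MyContext { multiplier: 2 };
    let TangleResult(y) = square_and_scale(&ctx, TangleArg(3));
    println!("{y}"); // 18
}
```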
Use `TangleArg`, `TangleArgs2`, etc. for parsing input fields. Always return `TangleResult<T>`.

Event Filters
Apply `TangleLayer` or `MatchesServiceId` to jobs to filter execution by service identity.

4. Context Composition
Contexts should:
5. Job Naming & IDs
- `pub const MY_JOB_ID: u64 = 0;`
- Use `snake_case_action_target` naming (e.g., `spawn_indexer_local`)
- Organize handlers in a `jobs` module, one file per logical task.
- Use the `#[debug_job]` macro for helpful traces.

6. Testing Blueprints
Use `TangleTestHarness` to simulate a full node and runtime:

Testing is composable, isolated, and persistent with `tempfile::TempDir`.

7. Do's and Don'ts
✅ DO:
- Use `BlueprintEnvironment` for config.
- Use `TangleLayer` for filtering.
- Load `data_dir` from env or use a database.

❌ DON'T:
- Parse job arguments by hand; use the `TangleArg` extractors.

Shared Concepts for All Blueprints
This guide defines the foundational patterns shared across all Blueprint modalities (Tangle, Eigenlayer, Cron, P2P). Follow these to ensure your implementation is idiomatic, composable, and testable.
1. Blueprint Runner Pattern
All Blueprints are launched via `BlueprintRunner::builder(...)`. This runner wires the `Router`, producers, and consumers together.

The config passed (e.g. `TangleConfig`, `EigenlayerBLSConfig`) determines how jobs are submitted to the chain — not where events are ingested from.

2. Router and Job Routing
Routers map Job IDs to handler functions. Each `.route(ID, handler)` must be unique.

Use `.layer(...)` to apply:
- `TangleLayer` (standard substrate filters)
- `FilterLayer::new(MatchesServiceId(...))` for multi-tenant service execution
- `FilterLayer::new(MatchesContract(...))` to scope EVM jobs by contract address

Use `.with_context(...)` to pass your context into jobs.

3. Context Pattern
All contexts must:
- Embed `BlueprintEnvironment` with `#[config]`
- Derive `TangleClientContext`, `ServicesContext`, `KeystoreContext` as needed

Example:
Construction should be async:
4. Producer + Consumer Compatibility
Your producer and consumer determine event ingestion and message submission:
- Producers: `TangleProducer`, `PollingProducer` (`eth_getLogs` polling), `CronJob`, `RoundBasedAdapter`
- Consumers: `TangleConsumer`, `EVMConsumer`

🧠 Important: A Blueprint using `TangleConfig` may use EVM producers + consumers. The config determines where results are sent, not where events come from.

5. Job Signature Conventions
Use extractors to simplify job argument handling:
- `TangleArg<T>`: one field
- `TangleArgs2<A, B>`: two fields
- `BlockEvents`: EVM logs
- `Context<MyContext>`: context injection

Return `TangleResult<T>` or `Result<(), Error>` depending on job type.

6. Keystore and Signer Usage
Load from `BlueprintEnvironment`:

For BLS (Eigenlayer):
7. Naming & Organization
- `pub const JOB_NAME_ID: u64 = 0;`
- Suffix job names by modality (`_eigen`, `_local`, `_cron`, etc.)
- Use `PascalCaseContext` naming (e.g., `AggregatorContext`)
- Organize as `jobs/mod.rs`, `jobs/indexer.rs`, `jobs/config.rs`

Use the `#[debug_job]` macro to log entry and exit automatically.

8. Testing Conventions
Use `TangleTestHarness` or Anvil + Alloy to simulate:
- service setup (`setup_services::<N>()`)
- job submission (`submit_job(...)`)
- execution (`wait_for_job_execution(...)`)
- verification (`verify_job(...)`)

For Eigenlayer:
- `cast` CLI or Anvil state
- `watch_logs`
- `sol!` macro bindings
❌ Never use a `TangleConsumer` or `TangleProducer` outside of a Tangle-specific blueprint.

Blueprint Networking SDK
This document explains how to use the Blueprint SDK’s networking primitives to integrate libp2p-based peer-to-peer messaging into any Tangle or Eigenlayer Blueprint. It focuses on instantiating the networking layer in production contexts, configuring allowed keys from multiple environments, and composing custom P2P services.
1. Networking Overview
The Blueprint SDK supports P2P communication via:
- `NetworkService` — manages the network lifecycle
- `NetworkServiceHandle` — used in jobs/contexts to send/receive messages
- `NetworkConfig` — initializes node identity, protocol name, allowed keys
- `AllowedKeys` — limits which nodes can connect

The networking stack is libp2p-native and works in Tangle, Eigenlayer, or custom Blueprint deployments.
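A simplified, std-only stand-in for the `AllowedKeys` gating described above — the real SDK types carry actual key material and libp2p identities; these are illustrative:

```rust
use std::collections::HashSet;

// Illustrative stand-in: the real InstanceMsgPublicKey wraps actual key bytes.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
pub struct InstanceMsgPublicKey(pub [u8; 32]);

// Only peers whose public key is in the allow set may connect.
pub struct AllowedKeys(pub HashSet<InstanceMsgPublicKey>);

impl AllowedKeys {
    pub fn permits(&self, key: &InstanceMsgPublicKey) -> bool {
        self.0.contains(key)
    }
}

fn main() {
    let trusted = InstanceMsgPublicKey([1u8; 32]);
    let stranger = InstanceMsgPublicKey([2u8; 32]);
    let allowed = AllowedKeys(HashSet::from([trusted.clone()]));
    println!("{}", allowed.permits(&trusted)); // true
    println!("{}", allowed.permits(&stranger)); // false
}
```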
2. Integrating Networking into a Context
Context Layout
Context Constructor
3. Computing Allowed Keys
✅ From Tangle
✅ From Eigenlayer AVS
4. Sending and Receiving Messages
Sending
Receiving
Use `bincode` or similar for message serialization.

5. Notes on Identity
- The node identity in `NetworkConfig` comes from the `instance_key_pair` field
- The `InstanceMsgPublicKey` must match one used in the `AllowedKeys`
- Key types: `SpEcdsa`, `ArkBlsBn254`, others via the `KeyType` trait
✅ DO:
- Namespace protocol names (e.g., `/app/version/...`)

❌ DON'T:
7. Use Cases
For round-based coordination, see the `round-based.md` doc.

Round-Based Protocols with Blueprint SDK
This guide describes how to design and execute round-based multiparty protocols using the
`round_based` crate and the Blueprint SDK's `RoundBasedNetworkAdapter`. These protocols are ideal for DKG, randomness generation, keygen, signing, or any interactive consensus.

1. Key Concepts
- Protocol messages must derive `Serialize` and `Deserialize`

2. Define Protocol Messages
3. Set Up the Router
4. Send and Receive
You may access indexed results and verify per party.
5. Connect to Network
You now have `incoming` and `outgoing` channels to wire into your protocol.

6. Simulating the Protocol
For local dev:
7. Production Pattern
Use the adapter in a background task or job with:
- `RoundBasedNetworkAdapter`
- `InstanceMsgPublicKey`s

8. Blame Tracking
To identify misbehavior:
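A std-only sketch of the commit/decommit check — here `DefaultHasher` stands in purely for illustration where a real protocol would use SHA-256, and the tuple layout is invented:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in commitment: in a real protocol this would be sha256(decommit).
fn commit(decommit: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    decommit.hash(&mut h);
    h.finish()
}

// Returns the IDs of parties whose revealed value does not match their commitment.
// Each entry is (party_id, claimed_commitment, revealed_decommit).
fn blame_mismatches(rounds: &[(u16, u64, Vec<u8>)]) -> Vec<u16> {
    rounds
        .iter()
        .filter(|(_, c, d)| *c != commit(d))
        .map(|(party, _, _)| *party)
        .collect()
}

fn main() {
    let honest = (0u16, commit(b"secret"), b"secret".to_vec());
    let cheater = (1u16, commit(b"secret"), b"forged".to_vec());
    println!("{:?}", blame_mismatches(&[honest, cheater])); // [1]
}
```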
If `commit != sha256(decommit)`, blame the peer and continue the protocol.

9. Error Handling
Use rich error types to pinpoint issues:
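One hedged way to shape such rich errors, with variant names invented for illustration — each variant carries enough data to attribute blame to a specific party and round:

```rust
use std::fmt;

// Illustrative error type for a round-based protocol.
#[derive(Debug, PartialEq)]
pub enum ProtocolError {
    CommitmentMismatch { party: u16 },
    MissingMessage { round: u8, party: u16 },
    Deserialization { party: u16 },
}

impl fmt::Display for ProtocolError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::CommitmentMismatch { party } => {
                write!(f, "party {party}: commitment does not match decommitment")
            }
            Self::MissingMessage { round, party } => {
                write!(f, "round {round}: no message from party {party}")
            }
            Self::Deserialization { party } => {
                write!(f, "party {party}: undecodable message")
            }
        }
    }
}

impl std::error::Error for ProtocolError {}

fn main() {
    let err = ProtocolError::MissingMessage { round: 2, party: 7 };
    println!("{err}"); // round 2: no message from party 7
}
```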
10. Use Cases
Use this guide to scaffold secure, blame-attributing, peer-verifiable round-based protocols.
Solidity Blueprint contract
You can override these base methods to implement everything related to the on-chain functionality of the Blueprint: handling job requests, service creation, approvals, rejections, job calls, and job result submissions (where jobs are verified).