Source: https://deepwiki.com/tangle-network/blueprint
Generated: 2025-06-09T00:21:27.280308
Command Type: deep_crawl
Original Size: 573.53 KB (587,184 chars)
Summary Size: 410.53 KB (420,163 chars)
Compression: 28.4% reduction (1.4:1 ratio)
The Blueprint framework provides a comprehensive set of tools for building decentralized applications across multiple blockchain networks. Key advantages include:
- Protocol Agnostic: Write once, deploy to multiple blockchain environments.
- Modular Design: Use only the components you need.
- Extensible: Add custom functionality through middleware and extensions.
- Secure: Strong cryptographic foundations and key management.
- Networked: Built-in P2P networking capabilities.
For more detailed information about specific aspects of the framework, see the related documentation pages on:
- Framework Architecture
- SDK Components
- Job and Router System
- Blueprint Manager and Runner
- Protocol Support
- Networking and Cryptography
To start using the Blueprint framework, you first need to install the cargo-tangle CLI tool:
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/tangle-network/gadget/releases/download/cargo-tangle/v0.1.1-beta.7/cargo-tangle-installer.sh | sh
or install from source:
cargo install cargo-tangle --git https://github.com/tangle-network/gadget --force
Once installed, you can create your first blueprint:
# Create a new blueprint named "my_blueprint"
cargo tangle blueprint create --name my_blueprint
# Navigate into the blueprint directory and build
cd my_blueprint
cargo build
The Blueprint framework is a comprehensive toolkit for building, deploying, and managing decentralized applications (dApps) across multiple blockchain environments. It supports multiple blockchain protocols, including Tangle Network, Eigenlayer, and EVM-compatible chains.
Blueprint's architecture is built around a modular, extensible core system with specialized components for different blockchain environments.
- Protocol Integrations
- cargo-tangle CLI
- Blueprint SDK
- Blueprint Runner
- Blueprint Manager
- Core Components
- Cryptography
- Blockchain Clients
- Networking
- Job Router
- Keystore
- Blueprint Sources
- Tangle Network
- EVM Clients
- Eigenlayer
- CLI (cargo-tangle): Command-line interface for creating, managing, and deploying blueprints.
- Blueprint SDK: Core toolkit with components for different functionalities.
- Blueprint Runner: Executes blueprint operations in a protocol-specific manner.
- Blueprint Manager: Orchestrates blueprint lifecycle, handling events and sources.
- Monitors events from blockchain networks.
- Manages the lifecycle of blueprint services.
- Fetches and spawns blueprints based on events.
- Configures the execution environment.
- Coordinates job execution.
- Provides protocol-specific runtime capabilities.
The Blueprint framework supports multiple blockchain protocols through specialized client implementations and protocol-specific extensions.
- Deployment
- Protocol Clients
- Deploy Tangle
- Deploy EVM
- Deploy Eigenlayer
- Protocol Extensions
- blueprint-tangle-extra
- blueprint-evm-extra
- blueprint-eigenlayer-extra
- blueprint-clients
- blueprint-client-tangle
- blueprint-client-evm
- blueprint-client-eigenlayer
- cargo-tangle CLI
- Core: Fundamental abstractions for the job system and blueprint building blocks.
- Router: Directs job requests to appropriate handlers.
- Crypto: Multi-chain cryptographic primitives for various key types and signature schemes.
- Clients: Protocol-specific clients for blockchain interactions.
- Networking: P2P networking capabilities based on libp2p.
- Keystore: Secure key management for multiple key types.
The job system provides a unified way to handle various tasks across different protocols.
Job Execution Flow: a JobCall (carrying a JobId) enters the Router. On a route match it is dispatched to the matching Job Handler; with no match it goes to the Fallback Handler. Always handlers run for every call. The handler's output is returned as a JobResult.
The Blueprint SDK is highly modular, consisting of multiple crates that can be used independently:
- blueprint-core
- blueprint-router
- blueprint-runner
- blueprint-crypto (crypto-core, crypto-k256, crypto-sr25519, crypto-ed25519, crypto-bls, crypto-bn254)
- blueprint-keystore
- blueprint-clients (client-core, client-tangle, client-evm, client-eigenlayer)
- blueprint-networking
- blueprint-stores
- JobCall: A request to execute a specific job with associated data
- JobId: A unique identifier for jobs
- Router: Examines incoming JobCalls and directs them to appropriate handlers
- Handlers: Functions that execute the logic for specific jobs
- JobResult: The result returned after job execution
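These concepts can be sketched as a tiny self-contained model. This is illustrative only — the real SDK's JobCall, Router, and JobResult types are richer, and the method names here are simplified stand-ins:

```rust
use std::collections::HashMap;

// Simplified stand-ins for the SDK's JobId / handler / result types.
type JobId = &'static str;
type Handler = fn(&[u8]) -> Vec<u8>;

struct Router {
    routes: HashMap<JobId, Handler>,
    fallback: Option<Handler>,
    always: Vec<Handler>,
}

impl Router {
    fn new() -> Self {
        Router { routes: HashMap::new(), fallback: None, always: Vec::new() }
    }
    fn route(&mut self, id: JobId, h: Handler) { self.routes.insert(id, h); }
    fn set_fallback(&mut self, h: Handler) { self.fallback = Some(h); }
    fn always(&mut self, h: Handler) { self.always.push(h); }

    // "Always" handlers run on every call; an exact match wins,
    // otherwise the fallback (if registered) handles the call.
    fn call(&self, id: JobId, payload: &[u8]) -> Option<Vec<u8>> {
        for h in &self.always {
            h(payload);
        }
        match self.routes.get(id) {
            Some(h) => Some(h(payload)),
            None => self.fallback.map(|h| h(payload)),
        }
    }
}

fn square(p: &[u8]) -> Vec<u8> { vec![p[0] * p[0]] }
fn unknown(_: &[u8]) -> Vec<u8> { b"no such job".to_vec() }
fn audit(_: &[u8]) -> Vec<u8> { Vec::new() } // side-effect-only "always" handler

fn main() {
    let mut router = Router::new();
    router.route("square", square);
    router.set_fallback(unknown);
    router.always(audit);

    assert_eq!(router.call("square", &[7]), Some(vec![49])); // exact match
    assert_eq!(router.call("cube", &[7]), Some(b"no such job".to_vec())); // fallback
    println!("dispatch ok");
}
```

The dispatch order mirrors the flow described in this section: always handlers fire first, then an exact route or the fallback produces the result.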
The Blueprint Manager and Runner handle the lifecycle of blueprints from deployment to execution.
Event Flow: the Blueprint Manager's Event Handler watches blockchain events. On a relevant event, the Manager consults its Blueprint Source Handler (GitHub, Container, or Test sources) to fetch and spawn the blueprint, configures the execution environment through the Blueprint Runner Builder, and the finalized runner executes jobs in response to blockchain events.
The Blueprint framework provides robust networking and cryptographic capabilities to support secure, distributed applications.
- Keystore: Secure key storage with multiple backends (file-based, in-memory, remote).
- Local Signer
- Remote Signer
- Hardware Signer
Supports multiple signature schemes and key types:
- K256 (secp256k1): Used for Ethereum and other EVM chains.
- SR25519: Used for Tangle and other Substrate-based blockchains.
- ED25519: General-purpose EdDSA implementation.
- BLS: Boneh-Lynn-Shacham signatures for aggregated signing.
- BN254: Barreto-Naehrig curves for zero-knowledge proofs.
Built on libp2p, providing:
- Peer-to-peer communication: Direct communication between nodes.
- Discovery: Finding and connecting to peers.
- Extensions: Support for specialized protocols like aggregated signatures and round-based protocols.
- Tangle Network: Native support for the Tangle blockchain.
- EVM-compatible chains: Support for Ethereum and other EVM chains.
- Eigenlayer: Support for Eigenlayer's consensus mechanisms.
The CLI provides deployment commands for each supported environment:

cargo tangle blueprint deploy --rpc-url wss://rpc.tangle.tools --package my_blueprint

The Blueprint SDK is a comprehensive framework for building, deploying, and managing decentralized applications across multiple blockchain environments. It provides a modular, protocol-agnostic foundation that enables developers to create services that can run on Tangle Network, Eigenlayer, and EVM-compatible blockchains.
- Overview
- Architecture Overview
- Key Concepts
- Protocol Support
- Installation
- Creating Your First Blueprint
- Example Blueprints
- Blueprint SDK
- Core Components
- Job System
- Router
- Networking
- Keystore
- Blueprint Runner
- Runner Configuration
- Job Execution Flow
- Blueprint Manager
- Event Handling
- Blueprint Sources
- CLI Reference
- Blueprint Commands
- Key Management Commands
- Deployment Options
- Development
- Build Environment
- Testing Framework
- CI/CD
- Advanced Topics
- Networking Extensions
- Macro System
- Custom Protocol Integration
To use the Blueprint SDK in your project, add it to your Cargo.toml:
[dependencies]
blueprint-sdk = { version = "0.1.0-alpha.7", features = ["std", "tracing"] }

You can enable additional features based on your requirements:
[dependencies]
blueprint-sdk = { version = "0.1.0-alpha.7", features = ["std", "tracing", "tangle", "networking"] }

A blueprint is built using the job system provided by the SDK. Here's a simplified example of how to create and configure a blueprint:
- Define your job handlers
- Configure a router to map job IDs to handlers
- Create a runner to execute the jobs
Blueprint Creation Flow:
- Define Job Handlers
- Configure Router
- Create Runner
- Build Runner
- Use Runner for Job Execution
- Cargo.lock
- Cargo.toml
- cli/Cargo.toml
- clients/eigenlayer/Cargo.toml
- clients/evm/Cargo.toml
- clients/tangle/Cargo.toml
- contexts/Cargo.toml
- manager/Cargo.toml
- sdk/Cargo.toml
- testing-utils/anvil/Cargo.toml
- testing-utils/core/Cargo.toml
- testing-utils/eigenlayer/Cargo.toml
- testing-utils/tangle/Cargo.toml
- blueprint-sdk
- blueprint-core
- blueprint-router
- blueprint-runner
- blueprint-crypto
- blueprint-keystore
- blueprint-clients
- blueprint-networking
- blueprint-contexts
- blueprint-stores
- blueprint-chain-setup
The SDK is built around several core components that form the foundation of the blueprint system:
The blueprint-core crate provides the fundamental abstractions and types for the entire system, including the Job system.
The blueprint-router handles the routing of jobs to appropriate handlers. It supports:
- Exact matches
- Fallback routes
- "Always" routes that execute for every job call
Routing behavior:
- router.route(job_id, handler) — runs when a JobCall's id matches (match found)
- router.fallback(handler) — runs when no route matches
- router.always(handler) — executes for every job call
- router.layer(middleware) — wraps handlers with middleware

A JobCall(job_id, payload) enters the Router and is dispatched to the matching Job Handler, the Fallback Handler, or the Always Handlers; the outcome is a JobResult.
The blueprint-runner is responsible for executing blueprints and managing their lifecycle. It provides the execution environment for the jobs and routes them to the appropriate handlers.
The blueprint-clients crate provides protocol-specific client implementations for interacting with different blockchain networks. It supports:
- Tangle Network via blueprint-client-tangle
- EVM-compatible chains via blueprint-client-evm
- Eigenlayer via blueprint-client-eigenlayer
Client Architecture: blueprint-client-core underpins blueprint-client-tangle (built on tangle-subxt), blueprint-client-evm (built on the alloy libraries), and blueprint-client-eigenlayer (built on eigensdk).
The blueprint-networking crate provides peer-to-peer networking capabilities built on libp2p, including extensions for different networking protocols such as round-based protocols and aggregated signature gossip.
The blueprint-contexts crate provides context providers for:
- Tangle Network
- EVM-compatible chains
- Eigenlayer
- Networking
- Keystore
The Blueprint SDK employs a feature-flag system for enabling/disabling components based on application needs:
- std: Enables standard library support
- web: Enables support for web targets
- tracing: Enables tracing support for debugging and monitoring
The blueprint-crypto crate provides cryptographic utilities for key generation, signing, and verification. It supports multiple cryptographic primitives:
- K256 (secp256k1) via blueprint-crypto-k256
- SR25519 via blueprint-crypto-sr25519
- ED25519 via blueprint-crypto-ed25519
- BLS via blueprint-crypto-bls
- BN254 via blueprint-crypto-bn254
The blueprint-keystore manages cryptographic keys, providing secure storage and access to keys for different cryptographic schemes and protocols.
- tangle: Enables Tangle Network support
- evm: Enables EVM support
- eigenlayer: Enables Eigenlayer support
- networking: Enables peer-to-peer networking support
- round-based-compat: Enables round-based protocol extensions
- local-store: Enables local key-value stores
- macros: Enables all macros from subcrates
- build: Enables build-time utilities
- testing: Enables testing utilities
- cronjob: Enables the cron job producer
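Putting the flags together, a Cargo.toml for a Tangle blueprint with networking and testing support might look like the following. The feature names are taken from the lists above and the version pin matches the other snippets in this document; adjust both to your release:

```toml
[dependencies]
blueprint-sdk = { version = "0.1.0-alpha.7", features = [
    "std",        # standard library support
    "tracing",    # debugging and monitoring
    "tangle",     # Tangle Network clients, contexts, and runner support
    "networking", # libp2p-based peer-to-peer networking
] }

[dev-dependencies]
blueprint-sdk = { version = "0.1.0-alpha.7", features = ["testing"] }
```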
Each protocol feature activates the corresponding crates:
- tangle → blueprint-clients/tangle, blueprint-contexts/tangle, blueprint-runner/tangle
- evm → blueprint-clients/evm, blueprint-contexts/evm
- networking → blueprint-networking, blueprint-contexts/networking, blueprint-runner/networking

The eigenlayer, testing, std, build, and macros features gate their components in the same way.
To interact with the Tangle Network:
blueprint-sdk = { version = "0.1.0-alpha.7", features = ["tangle"] }

Enable the evm feature:
blueprint-sdk = { version = "0.1.0-alpha.7", features = ["evm"] }

Enable the eigenlayer feature:
blueprint-sdk = { version = "0.1.0-alpha.7", features = ["eigenlayer"] }

Enable the testing feature to access testing utilities:
blueprint-sdk = { version = "0.1.0-alpha.7", features = ["testing"] }

The job system is the core abstraction in the Blueprint SDK. Jobs are identified by a JobId and can carry arbitrary payload data. The router maps these jobs to handlers, which process the jobs and return results.
A job handler can be any function or closure that takes a specific input type and returns a JobResult:
fn my_handler(input: MyInputType) -> JobResult<MyOutputType>

The router is configured by registering handlers for specific job IDs:
let mut router = Router::new();
router.route("my_job_id", my_handler);
router.fallback(fallback_handler);
router.always(always_handler);

The Blueprint SDK provides seamless integration with different blockchain protocols through its client implementations.
Enable the tangle feature to access Tangle Network-specific functionality:
blueprint-sdk = { version = "0.1.0-alpha.7", features = ["tangle"] }

This provides access to:
- blueprint-testing-utils: General testing utilities
- blueprint-core-testing-utils: Core testing primitives
- blueprint-anvil-testing-utils: Utilities for testing with Anvil (Ethereum)
- blueprint-tangle-testing-utils: Utilities for testing with Tangle Network
- blueprint-eigenlayer-testing-utils: Utilities for testing with Eigenlayer
The blueprint-chain-setup crate provides utilities for setting up and configuring different blockchain environments for development and testing.
The Blueprint SDK includes extensions for advanced networking patterns:
- blueprint-networking-round-based-extension: Support for round-based protocols
- blueprint-networking-agg-sig-gossip-extension: Support for aggregated signature gossip
The blueprint-stores crate provides key-value storage capabilities for blueprints. Enable the local-store feature to use local storage backends.
The Blueprint SDK provides a comprehensive, modular framework for building decentralized applications across multiple blockchain environments. Its flexible architecture allows developers to include only the components they need, while providing a consistent programming model regardless of the underlying blockchain protocol.
For more detailed information about specific components of the SDK, refer to the following pages:
You can install the CLI using the installation script (recommended):
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/tangle-network/gadget/releases/download/cargo-tangle/v0.1.1-beta.7/cargo-tangle-installer.sh | sh

Alternatively, you can install from source:
cargo install cargo-tangle --git https://github.com/tangle-network/gadget --force

The Tangle CLI provides a straightforward way to create a new blueprint project. The basic command is:
cargo tangle blueprint create --name <blueprint_name>

Before creating your first blueprint, ensure you have the following installed:
- Rust and Cargo
- OpenSSL development packages
- The Tangle CLI (
cargo-tangle)
On Ubuntu/Debian: sudo apt update && sudo apt install build-essential cmake libssl-dev pkg-config
On macOS: brew install openssl cmake

The create command generates a new blueprint project with the specified name using the default template. It creates a directory with your blueprint name, initializes a Git repository, and sets up the basic structure for your project.
The CLI supports different blueprint types:
- Tangle Blueprint (default) - For building services on Tangle Network
- Eigenlayer BLS Blueprint - For building BLS-based services on Eigenlayer
- Eigenlayer ECDSA Blueprint - For building ECDSA-based services on Eigenlayer
You can specify a blueprint type using the appropriate flag. The CLI will guide you through the process.
- README.md
- cli/README.md
- create/mod.rs
- keys.rs
- eigenlayer.rs
- forge.rs
- main.rs
- util.rs
- Cargo.toml
- config.rs
- bls.rs
- config.rs
- ecdsa.rs
- error.rs
- error.rs
- lib.rs
- config.rs
- error.rs
- Cargo.toml
- Cargo.lock
- Cargo.toml
When you create a new blueprint, you'll get a project with a structure similar to the following:
Your Blueprint Project
├── Cargo.toml (Project Config)
├── src/ (Source Code)
│ ├── main.rs (Entry Point)
│ ├── jobs.rs (Job Definitions)
│ └── lib.rs (Blueprint Library)
└── tests/ (Test Files)
1. Create a new blueprint: cargo tangle blueprint create
   - Specify name (--name)
   - Specify blueprint type (optional)
   - Specify template source (optional)
   - Generate from template
2. Build the project: cargo build
3. Test the project: cargo test
4. Deploy the blueprint: cargo tangle blueprint deploy
   - Register blueprint (optional)
   - Run blueprint service
- Cargo.toml - Project configuration and dependencies
- src/main.rs - The entry point for your blueprint service
- src/lib.rs - Core library code for your blueprint
- src/jobs.rs - Definitions for jobs that your blueprint can process
- tests/ - Unit and integration tests for your blueprint
To build your blueprint project, use the following commands:
cd my_blueprint
cargo build

For production deployment, build with the release profile:
cargo build --release

Deployment options:
- Deploy to Tangle Network
- Deploy to Eigenlayer
- Set Tangle-specific options
- Set Eigenlayer-specific options
- Generate Blueprint ID
- Deploy Smart Contracts
- Register as Operator (optional)
- Register as AVS Operator
- Run Blueprint Service on Tangle
- Run Blueprint Service on Eigenlayer
To ensure your blueprint works correctly, run the tests:
cargo test
After building your blueprint, you can deploy it to the target network. The Tangle CLI provides deployment commands for different networks:
cargo tangle blueprint deploy tangle --ws-rpc-url <WS_URL> --keystore-path <KEYSTORE_PATH> --package <PACKAGE_NAME>
For local development and testing, you can use the --devnet flag to automatically start a local testnet:
cargo tangle blueprint deploy tangle --devnet --package <PACKAGE_NAME>
When running your blueprint, you'll need to provide environment configuration. The deployment command will help set this up, but understanding the configuration elements is important:
| Configuration Element | Description | Default Value |
|---|---|---|
| HTTP RPC URL | HTTP endpoint of the target blockchain | http://127.0.0.1:9944 |
| WebSocket RPC URL | WebSocket endpoint of the target blockchain | ws://127.0.0.1:9944 |
| Keystore Path | Path to the keystore containing your keys | ./keystore |
| Protocol | Target protocol (Tangle, Eigenlayer) | tangle |
| Blueprint ID | ID of the deployed blueprint (Tangle only) | (Required for running) |
| Service ID | ID of the service instance (Tangle only) | (Required for running) |
The BlueprintEnvironment holds these configuration values and is passed to the blueprint runner.
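As a rough sketch, the table's defaults can be modeled with a plain struct. This illustrates the configuration shape only; it is not the SDK's actual BlueprintEnvironment definition:

```rust
// Toy configuration struct mirroring the table above — the SDK's real
// BlueprintEnvironment has more fields and is constructed by the runner.
#[derive(Debug, Clone)]
struct EnvConfig {
    http_rpc_url: String,
    ws_rpc_url: String,
    keystore_path: String,
    protocol: String,
    blueprint_id: Option<u64>, // required when running on Tangle
    service_id: Option<u64>,   // required when running on Tangle
}

impl Default for EnvConfig {
    fn default() -> Self {
        EnvConfig {
            http_rpc_url: "http://127.0.0.1:9944".into(),
            ws_rpc_url: "ws://127.0.0.1:9944".into(),
            keystore_path: "./keystore".into(),
            protocol: "tangle".into(),
            blueprint_id: None,
            service_id: None,
        }
    }
}

fn main() {
    let cfg = EnvConfig::default();
    assert_eq!(cfg.ws_rpc_url, "ws://127.0.0.1:9944");
    println!("protocol = {}", cfg.protocol);
}
```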
cargo-tangle CLI run command:

cargo tangle blueprint run --protocol tangle --rpc-url http://127.0.0.1:9944 --keystore-path ./keystore
Job System Architecture: an Event Producer produces a JobCall (with a JobId), which the Router routes to a Job Handler Function; the handler processes it and returns a JobResult, which is sent to a Job Consumer.
Blueprint Definition:
- Job Producers: Sources of events that create job calls.
- Job Handlers: Functions that process job calls.
- Job Router: Routes job calls to appropriate handlers.
- Job Consumers: Destinations for job results.
- Blueprint Runner: Orchestrates the flow, ensuring jobs are routed and results distributed.
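The producer/router/consumer flow described above can be mimicked with plain channels. This is a toy pipeline that assumes nothing about the SDK's actual producer or consumer traits:

```rust
use std::sync::mpsc;
use std::thread;

// Toy routing logic: job id 0 is handled, everything else falls through.
fn route(job_id: u64, payload: u64) -> String {
    match job_id {
        0 => format!("square({payload}) = {}", payload * payload),
        _ => format!("no handler for job {job_id}"),
    }
}

fn main() {
    let (job_tx, job_rx) = mpsc::channel::<(u64, u64)>(); // producer -> runner
    let (res_tx, res_rx) = mpsc::channel::<String>();     // runner -> consumer

    // "Runner": receives job calls, routes them, forwards results.
    let runner = thread::spawn(move || {
        for (job_id, payload) in job_rx {
            res_tx.send(route(job_id, payload)).unwrap();
        }
    });

    // "Producer": emits job calls, then closes the channel.
    job_tx.send((0, 9)).unwrap();
    job_tx.send((42, 1)).unwrap();
    drop(job_tx);
    runner.join().unwrap();

    // "Consumer": drains results once the runner is done.
    let results: Vec<String> = res_rx.iter().collect();
    assert_eq!(results[0], "square(9) = 81");
    assert_eq!(results[1], "no handler for job 42");
    println!("{results:?}");
}
```

In the real framework the producer end is a blockchain event stream and the consumer end submits results on-chain; the wiring shape is the same.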
- Adding new jobs to handle different events or tasks.
- Implementing custom job producers for specific needs.
- Extending your blueprint with networking capabilities.
- Integrating with on-chain logic for complex use cases.
For more detailed information on these topics, refer to the Blueprint SDK documentation. To learn about example blueprints that showcase different capabilities, see Example Blueprints. Sources: 14-21 30-118.
// Create a simple in-memory keystore
let config = KeystoreConfig::new().in_memory(true);
let keystore = Keystore::new(config)?;
// Or with file storage
let config = KeystoreConfig::new().fs_root("path/to/keystore");
let keystore = Keystore::new(config)?;

Before installing the Tangle Blueprint framework, ensure your system has the following dependencies:
- Rust (version 1.86 or later)
- OpenSSL development packages
- Basic build tools (compiler, linker)
Platform Prerequisites

| Platform | Prerequisites | Install Command |
|---|---|---|
| Ubuntu/Debian | build-essential, cmake, libssl-dev, pkg-config | apt install |
| macOS | openssl, cmake | brew install |
| Windows | Requires Windows Subsystem for Linux (WSL2) | — |
- nextest.toml
- README.md
- cli/README.md
- create/mod.rs
- keys.rs
- eigenlayer.rs
- error.rs
- config.rs
- mod.rs
- mod.rs
- substrate.rs
- aggregator_selection.rs
- flake.lock
- flake.nix
- rust-toolchain.toml
Ubuntu/Debian: sudo apt update && sudo apt install build-essential cmake libssl-dev pkg-config
macOS: brew install openssl cmake
Windows: install Windows Subsystem for Linux (WSL2) and follow the Ubuntu/Debian instructions.
The cargo-tangle CLI is the primary tool for interacting with the Blueprint framework.
Install with the script:

curl --proto '=https' --tlsv1.2 -LsSf https://github.com/tangle-network/gadget/releases/download/cargo-tangle/v0.1.1-beta.7/cargo-tangle-installer.sh | sh

Or from source:

cargo install cargo-tangle --git https://github.com/tangle-network/blueprint --force

The Blueprint framework requires specific Rust components and versions to function properly.
Key CLI subcommands: blueprint create, blueprint deploy, generate-keys, list-blueprints, blueprint run.
- Rust Version: 1.86
- Required Components:
cargo, rustfmt, clippy, rust-src
[toolchain]
channel = "1.86"
components = ["cargo", "rustfmt", "clippy", "rust-src"]
profile = "minimal"

To use the Nix development environment:
- Install Nix with flakes support.
- Run: nix develop

This will provide a shell with all necessary tools and dependencies installed.
The Blueprint framework uses a flexible keystore system for managing cryptographic keys. Several backend options are available:
- Keystore Storage Options:
- InMemoryStorage (no persistence)
- FileStorage (file system path)
- SubstrateStorage (Substrate LocalKeystore)
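The idea behind pluggable backends can be sketched with a small trait. This is a conceptual illustration only — the SDK's actual storage abstraction differs, and only the backend names above (InMemoryStorage, FileStorage, SubstrateStorage) come from the source:

```rust
use std::collections::HashMap;

// Toy keystore backend abstraction — not the SDK's real trait.
trait KeyStorage {
    fn put(&mut self, key_id: &str, secret: Vec<u8>);
    fn get(&self, key_id: &str) -> Option<Vec<u8>>;
}

/// No persistence: keys vanish when the process exits.
struct InMemory {
    keys: HashMap<String, Vec<u8>>,
}

impl KeyStorage for InMemory {
    fn put(&mut self, key_id: &str, secret: Vec<u8>) {
        self.keys.insert(key_id.to_string(), secret);
    }
    fn get(&self, key_id: &str) -> Option<Vec<u8>> {
        self.keys.get(key_id).cloned()
    }
}

fn main() {
    // A file-backed implementation would write each secret under a
    // filesystem root instead of into the map; calling code is unchanged.
    let mut store: Box<dyn KeyStorage> = Box::new(InMemory { keys: HashMap::new() });
    store.put("sr25519-operator", vec![1, 2, 3]);
    assert_eq!(store.get("sr25519-operator"), Some(vec![1, 2, 3]));
    println!("keystore ok");
}
```

Because callers only see the trait, swapping in-memory storage for file or Substrate-backed storage is a configuration change, which is exactly what KeystoreConfig enables.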
| Key Type | Description | Usage |
|---|---|---|
| SR25519 | Schnorrkel/Ristretto x25519 | Substrate/Tangle signatures |
| ED25519 | Edwards-curve Digital Signature Algorithm | General purpose signatures |
| ECDSA | Elliptic Curve Digital Signature Algorithm | Ethereum/EVM signatures |
| BLS | Boneh–Lynn–Shacham signatures | Aggregate signatures for Eigenlayer |
| BN254 | Barreto-Naehrig curve | Zero-knowledge proofs |
To generate a new key pair:
cargo tangle blueprint generate-keys -k <KEY_TYPE> -p <PATH> -s <SURI/SEED> --show-secret
Where:
- <KEY_TYPE>: sr25519, ecdsa, bls_bn254, ed25519, bls381
- <PATH>: Directory to store the key (optional)
- <SURI/SEED>: Seed phrase or string (optional)
- --show-secret: Displays the private key (optional)
Example:
cargo tangle blueprint generate-keys -k sr25519 -p ./my-keystore --show-secret

To verify that the installation was successful:
cargo tangle --version

You should see the version number of the installed CLI.
To verify that all components are working correctly, create a simple test blueprint:
cargo tangle blueprint create --name test_blueprint
cargo build

If the build succeeds, your installation is working properly.
If you encounter errors about missing libraries:
- For OpenSSL issues: ensure you have libssl-dev (Ubuntu/Debian) or openssl (macOS) installed
- For build tools: install build-essential (Ubuntu/Debian) or the Xcode command line tools (macOS)
If you encounter keystore-related errors:
- Check file permissions on the keystore directory
- Ensure the path exists and is writable
- For permission errors, try running with elevated privileges or adjusting directory permissions
Sources: 1-159
This diagram illustrates the complete installation architecture and how various components of the Blueprint framework interact:
## Documentation
## Introduction to Tangle Network's Blueprint Framework
### Tangle Network's Blueprint Framework Overview
- **Overview**: [Overview](https://deepwiki.com/tangle-network/blueprint/1-overview)
- **Architecture Overview**: [Architecture Overview](https://deepwiki.com/tangle-network/blueprint/1.1-architecture-overview)
- **Key Concepts**: [Key Concepts](https://deepwiki.com/tangle-network/blueprint/1.2-key-concepts)
- **Protocol Support**: [Protocol Support](https://deepwiki.com/tangle-network/blueprint/1.3-protocol-support)
### Getting Started
- **Installation**: [Installation](https://deepwiki.com/tangle-network/blueprint/2.1-installation)
- **Creating Your First Blueprint**: [Creating Your First Blueprint](https://deepwiki.com/tangle-network/blueprint/2.2-creating-your-first-blueprint)
- **Example Blueprints**: [Example Blueprints](https://deepwiki.com/tangle-network/blueprint/2.3-example-blueprints)
### Blueprint SDK
- **Core Components**: [Core Components](https://deepwiki.com/tangle-network/blueprint/3.1-core-components)
- **Job System**: [Job System](https://deepwiki.com/tangle-network/blueprint/3.2-job-system)
- **Router**: [Router](https://deepwiki.com/tangle-network/blueprint/3.3-router)
- **Networking**: [Networking](https://deepwiki.com/tangle-network/blueprint/3.4-networking)
- **Keystore**: [Keystore](https://deepwiki.com/tangle-network/blueprint/3.5-keystore)
### Blueprint Runner
- **Runner Configuration**: [Runner Configuration](https://deepwiki.com/tangle-network/blueprint/4.1-runner-configuration)
- **Job Execution Flow**: [Job Execution Flow](https://deepwiki.com/tangle-network/blueprint/4.2-job-execution-flow)
### Blueprint Manager
- **Event Handling**: [Event Handling](https://deepwiki.com/tangle-network/blueprint/5.1-event-handling)
- **Blueprint Sources**: [Blueprint Sources](https://deepwiki.com/tangle-network/blueprint/5.2-blueprint-sources)
### CLI Reference
- **Blueprint Commands**: [Blueprint Commands](https://deepwiki.com/tangle-network/blueprint/6.1-blueprint-commands)
- **Key Management Commands**: [Key Management Commands](https://deepwiki.com/tangle-network/blueprint/6.2-key-management-commands)
- **Deployment Options**: [Deployment Options](https://deepwiki.com/tangle-network/blueprint/6.3-deployment-options)
### Development
- **Build Environment**: [Build Environment](https://deepwiki.com/tangle-network/blueprint/7.1-build-environment)
- **Testing Framework**: [Testing Framework](https://deepwiki.com/tangle-network/blueprint/7.2-testing-framework)
- **CI/CD**: [CI/CD](https://deepwiki.com/tangle-network/blueprint/7.3-cicd)
### Advanced Topics
- **Networking Extensions**: [Networking Extensions](https://deepwiki.com/tangle-network/blueprint/8.1-networking-extensions)
- **Macro System**: [Macro System](https://deepwiki.com/tangle-network/blueprint/8.2-macro-system)
- **Custom Protocol Integration**: [Custom Protocol Integration](https://deepwiki.com/tangle-network/blueprint/8.3-custom-protocol-integration)
## Getting Started
## Architecture Overview
## Relevant Source Files
- [Cargo.lock](https://github.com/tangle-network/blueprint/blob/af8278cb/Cargo.lock)
- [Cargo.toml](https://github.com/tangle-network/blueprint/blob/af8278cb/Cargo.toml)
- [README.md](https://github.com/tangle-network/blueprint/blob/af8278cb/README.md)
- [CLI Cargo.toml](https://github.com/tangle-network/blueprint/blob/af8278cb/cli/Cargo.toml)
- [CLI README.md](https://github.com/tangle-network/blueprint/blob/af8278cb/cli/README.md)
- [Create Command](https://github.com/tangle-network/blueprint/blob/af8278cb/cli/src/command/create/mod.rs)
- [Keys Command](https://github.com/tangle-network/blueprint/blob/af8278cb/cli/src/command/keys.rs)
- [Eigenlayer Command](https://github.com/tangle-network/blueprint/blob/af8278cb/cli/src/command/run/eigenlayer.rs)
- [Client Cargo.toml (Eigenlayer)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/eigenlayer/Cargo.toml)
- [Client Cargo.toml (EVM)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/evm/Cargo.toml)
- [Client Cargo.toml (Tangle)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/tangle/Cargo.toml)
- [Contexts Cargo.toml](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/contexts/Cargo.toml)
- [Manager Cargo.toml](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/manager/Cargo.toml)
- [SDK Cargo.toml](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/sdk/Cargo.toml)
- [Testing Utils (Anvil)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/testing-utils/anvil/Cargo.toml)
- [Testing Utils (Core)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/testing-utils/core/Cargo.toml)
- [Testing Utils (Eigenlayer)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/testing-utils/eigenlayer/Cargo.toml)
- [Testing Utils (Tangle)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/testing-utils/tangle/Cargo.toml)
## System Overview
The Blueprint framework features a modular architecture for developing decentralized applications across Tangle Network, Eigenlayer, and EVM-compatible blockchains. Key components include:
- **Job System**: Defines and routes units of work.
- **Protocol Integrations**: Supports Tangle Network, EVM-compatible blockchains, and Eigenlayer.
- **Networking Layer**: Facilitates peer-to-peer communication.
- **Keystore**: Manages secure key storage.
- **Manager**: Handles the lifecycle of blueprints.
- **CLI**: Provides user interaction capabilities.
This architecture allows developers to create complex decentralized applications without needing to manage blockchain-specific implementation details.
## Blueprint SDK Architecture
The Blueprint SDK is the core of the framework, providing a comprehensive set of tools and libraries for blueprint development. It follows a highly modular design, allowing developers to include only the components they need for their specific use case.
### Key Components
- **Blockchain Clients**
- `blueprint-core`
- `blueprint-router`
- `blueprint-runner`
- `blueprint-clients`
- `blueprint-crypto`
- `blueprint-networking`
- `blueprint-keystore`
- `blueprint-contexts`
- `blueprint-stores`
- **Protocol Extras**
- `blueprint-tangle-extra`
- `blueprint-evm-extra`
- `blueprint-eigenlayer-extra`
- `blueprint-crypto-core`
- `blueprint-crypto-k256`
- `blueprint-crypto-sr25519`
- `blueprint-crypto-ed25519`
- `blueprint-crypto-bls`
- `blueprint-crypto-bn254`
- `blueprint-client-core`
- `blueprint-client-tangle`
- `blueprint-client-evm`
- `blueprint-client-eigenlayer`
Sources: [15-26](https://github.com/tangle-network/blueprint/blob/af8278cb/README.md#L15-L26) [40-119](https://github.com/tangle-network/blueprint/blob/af8278cb/Cargo.toml#L40-L119) [15-35](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/sdk/Cargo.toml#L15-L35) [28-73](https://github.com/tangle-network/blueprint/blob/af8278cb/README.md#L28-L73)
## Job System Architecture
The job system in the Blueprint framework consists of three main components:
1. **Jobs**: Encapsulated units of work.
2. **Router**: Routes job calls to appropriate handlers.
3. **Runner**: Executes jobs in a protocol-specific manner.
The router examines the `JobId` of incoming `JobCall` objects and directs them to the appropriate handler. It supports three types of routes:
1. **Exact matches**: Routes that handle specific job IDs.
2. **Fallback routes**: Handle unmatched job calls.
3. **Always routes**: Execute for every job call.
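As a conceptual sketch of these three route types (using a toy dispatch table, not the real `blueprint-router` API), routing can be modeled like this:

```rust
use std::collections::HashMap;

// Toy router: exact routes keyed by job ID, one fallback, and "always" routes.
type Handler = fn(&str) -> String;

fn log_call(body: &str) -> String {
    format!("log:{body}")
}

fn square(body: &str) -> String {
    let n: u64 = body.parse().unwrap_or(0);
    (n * n).to_string()
}

fn unmatched(body: &str) -> String {
    format!("no handler for {body}")
}

struct MiniRouter {
    exact: HashMap<u64, Handler>,
    fallback: Option<Handler>,
    always: Vec<Handler>,
}

impl MiniRouter {
    // Always routes run for every call; an exact match wins over the fallback.
    fn dispatch(&self, job_id: u64, body: &str) -> Vec<String> {
        let mut out: Vec<String> = self.always.iter().map(|h| h(body)).collect();
        match self.exact.get(&job_id) {
            Some(h) => out.push(h(body)),
            None => {
                if let Some(h) = self.fallback {
                    out.push(h(body));
                }
            }
        }
        out
    }
}
```

Here `dispatch(1, "4")` runs the always-route logger plus the exact squaring handler, while an unknown job ID falls through to the fallback handler.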
Sources: [Cargo.toml (core)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/core/Cargo.toml) [Cargo.toml (router)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/router/Cargo.toml) [Cargo.toml (runner)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/runner/Cargo.toml)
## Blueprint Manager
The Blueprint Manager monitors events from the Tangle network and manages the lifecycle of blueprint services. It handles fetching, spawning, and executing blueprints in response to network events.
In the event flow, the Event Monitor watches network events and notifies the Blueprint Manager, which fetches the blueprint through a source handler (GitHub, container, or local sources) and spawns it in a Blueprint Runner.
## Protocol Integration Architecture
The Blueprint framework integrates with multiple blockchain protocols through specialized client implementations, allowing developers to build applications that interact with different blockchains.
## Runner Adapters and Protocol Libraries
- **Tangle Client**: Interacts with the Tangle Network using the `tangle-subxt` library.
- **EVM Client**: Connects to EVM-compatible blockchains using Alloy libraries.
- **Eigenlayer Client**: Interfaces with Eigenlayer services using the `eigensdk` library.
#### Libraries
- `blueprint-client-core`
- `blueprint-client-tangle`
- `blueprint-client-evm`
- `blueprint-client-eigenlayer`
- `tangle-subxt`
- `alloy libraries`
- `eigensdk`
#### Adapters
- Tangle Adapter
- EVM Adapter
- Eigenlayer Adapter
Sources:
- [Tangle Client Cargo.toml](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/tangle/Cargo.toml#L1-L48)
- [EVM Client Cargo.toml](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/evm/Cargo.toml#L1-L64)
- [Eigenlayer Client Cargo.toml](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/eigenlayer/Cargo.toml#L1-L66)
## Networking Architecture
The Blueprint framework includes a robust networking layer built on libp2p, providing peer-to-peer communication capabilities for decentralized applications.
The stack composes the following, from top to bottom:
- **Network Service**: drives the overall networking layer
- **Gadget Behaviour**: combines the Peer Manager, Discovery, and the Blueprint Protocol
- **libp2p components**: Ping, Gossipsub, Kademlia DHT, and MDNS Discovery
- **Network Extensions**: Aggregated Signature Gossip and the Round-Based Protocol
The networking layer includes extensions for specialized use cases:
- **Aggregated Signature Gossip**: For efficient signature aggregation in consensus protocols
- **Round-Based Protocol**: Compatibility layer for round-based MPC protocols
Sources: [GitHub Repository](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/networking/Cargo.toml)
## Keystore Architecture
The Blueprint framework includes a flexible keystore implementation for secure key management, supporting various key types and storage backends.
### Key Types
- **SR25519**: Used primarily for Tangle/Substrate-based chains
- **ED25519**: General-purpose EdDSA signatures
- **ECDSA**: Used for EVM-compatible blockchains
- **BLS**: Used for threshold signatures and signature aggregation
- **BN254**: Used for Eigenlayer's BLS verification
### Storage Backends
- **In-Memory Storage**
- **File Storage**
- **Remote Storage**
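The backend idea can be sketched with a minimal in-memory store (illustrative only; the trait and type names below are not the real `blueprint-keystore` API):

```rust
use std::collections::HashMap;

// Key material is looked up by (key type, public key). Real backends would
// also encrypt secrets at rest (file storage) or delegate to a remote signer.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum KeyType {
    Sr25519,
    Ed25519,
    Ecdsa,
    Bls,
    Bn254,
}

trait KeyBackend {
    fn store(&mut self, kind: KeyType, public: Vec<u8>, secret: Vec<u8>);
    fn secret(&self, kind: KeyType, public: &[u8]) -> Option<Vec<u8>>;
}

#[derive(Default)]
struct InMemoryBackend {
    keys: HashMap<(KeyType, Vec<u8>), Vec<u8>>,
}

impl KeyBackend for InMemoryBackend {
    fn store(&mut self, kind: KeyType, public: Vec<u8>, secret: Vec<u8>) {
        self.keys.insert((kind, public), secret);
    }
    fn secret(&self, kind: KeyType, public: &[u8]) -> Option<Vec<u8>> {
        self.keys.get(&(kind, public.to_vec())).cloned()
    }
}
```

Swapping `InMemoryBackend` for a file-backed or remote implementation leaves calling code unchanged, which is the point of a backend trait.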
## CLI Architecture
The `cargo-tangle` CLI provides a user-friendly interface for interacting with the Blueprint framework, with commands for creating, deploying, and managing blueprints.
### Key Management CLI
A key management CLI is available for generating, importing, and exporting keys.
Sources: [1-321](https://github.com/tangle-network/blueprint/blob/af8278cb/cli/src/command/keys.rs#L1-L321) [69-77](https://github.com/tangle-network/blueprint/blob/af8278cb/Cargo.toml#L69-L77)
## CLI Command Structure
The CLI includes commands for various operations:
- **Blueprint Commands**: Creating, deploying, and running blueprints
- **Key Management Commands**: Generating, importing, and exporting cryptographic keys
### Commands Overview
- **Create a Blueprint**: Generates a new project from a template.
- **Deploy a Blueprint**: Uploads it to the selected blockchain platform (Tangle Network, Eigenlayer, etc.).
### Code Snippet Example
```bash
# Example commands for CLI
cargo-tangle create <blueprint_name>
cargo-tangle deploy <blueprint_name>
cargo-tangle run <blueprint_name>
cargo-tangle list
cargo-tangle generate <key_name>
cargo-tangle import <key_file>
cargo-tangle export <key_name>
```
## Testing Utilities
The Blueprint framework includes comprehensive testing utilities for unit testing, integration testing, and end-to-end testing of blueprints.
- **Test Environments**: Tangle Testnet, Anvil Testnet, and Eigenlayer Testnet
- **Core Testing Utilities**: common functionality used by all protocol-specific utilities
- **Protocol-Specific Utilities**: specialized tools for Tangle, Anvil (EVM), and Eigenlayer
- **Chain Setup** (`blueprint-chain-setup`): Tangle and Anvil setup helpers
The Blueprint framework includes a context system that provides protocol-specific functionality to blueprint applications through extension traits and types:
- **Crates**: `blueprint-contexts` and `blueprint-context-derive` (generates implementations)
- **Protocol contexts**: Tangle Context, EVM Context, Eigenlayer Context
- **Feature contexts**: Networking Context, Keystore Context
The context system allows extending blueprint applications with protocol-specific functionality without tightly coupling the application code to a specific protocol implementation.
When a blueprint is executed, the system follows a specific flow from the CLI through the various components to the target blockchain.
1. The `cargo-tangle` CLI issues a deploy or run command.
2. The Blueprint Manager fetches the source and provides the blueprint.
3. The Blueprint Runner initializes and routes jobs through the Job Router.
4. Job Handlers execute and produce Job Results.
5. Protocol Adapters (Tangle, EVM, Eigenlayer) carry out protocol-specific execution against the target networks (Tangle Network, EVM networks, Eigenlayer), possibly across multiple instances.
This flow illustrates how a blueprint moves from deployment or execution command through the system to interact with the target blockchain, with protocol adapters handling the specifics of each blockchain platform.
Documentation contents:
- Overview
- Architecture Overview
- Key Concepts
- Protocol Support
- Installation
- Creating Your First Blueprint
- Example Blueprints
- Blueprint SDK
- Core Components
- Job System
- Router
- Networking
- Keystore
- Blueprint Runner
- Runner Configuration
- Job Execution Flow
- Blueprint Manager
- Event Handling
- Blueprint Sources
- CLI Reference
- Blueprint Commands
- Key Management Commands
- Deployment Options
- Development
- Build Environment
- Testing Framework
- CI/CD
- Advanced Topics
- Networking Extensions
- Macro System
- Custom Protocol Integration
The Blueprint framework currently supports the following blockchain protocols:
- Tangle Network
- EVM-compatible Chains
- Eigenlayer
The Blueprint SDK uses Rust's feature flags system to control which protocol modules are included in your application:
| Protocol | Feature Flag | Description |
|---|---|---|
| Tangle Network | `tangle` | Enables Tangle Network support |
| EVM | `evm` | Enables EVM-compatible chain support |
| Eigenlayer | `eigenlayer` | Enables Eigenlayer support (implies `evm`) |
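These flags are enabled through Cargo features in the blueprint's `Cargo.toml`; a minimal sketch (the crate version here is a placeholder assumption):

```toml
[dependencies]
# Tangle-only blueprint:
blueprint-sdk = { version = "0.1", features = ["tangle"] }

# For an Eigenlayer AVS, the "eigenlayer" feature implies "evm":
# blueprint-sdk = { version = "0.1", features = ["eigenlayer"] }
```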
The Tangle Network is the primary protocol for blueprint deployment. Users can:
- Deploy blueprint code on-chain
- Register as operators for blueprints
- Request services from blueprints
- Submit jobs to blueprint services
- Automate event handling with the Blueprint Manager
The Blueprint framework provides integration with any EVM-compatible blockchain, including:
- Ethereum
- Layer 2 solutions
- Local Anvil instances for development
EVM support enables blueprint applications to interact with smart contracts, send transactions, and listen for events on these networks.
Eigenlayer support builds on EVM capabilities and adds specialized features for Actively Validated Services (AVS):
- BLS signature aggregation for efficient consensus
- ECDSA signature verification
- Operator state management
- Delegation and staking capabilities
The Blueprint SDK implements protocol support through feature flags and a modular architecture:
- **Protocol Clients**: `blueprint-client-tangle`, `blueprint-client-evm`, `blueprint-client-eigenlayer`
- **Protocol Extensions**: `blueprint-tangle-extra`, `blueprint-evm-extra`, `blueprint-eigenlayer-extra`
- **Testing Utilities**: `blueprint-tangle-testing-utils`, `blueprint-anvil-testing-utils`, `blueprint-eigenlayer-testing-utils`
The Eigenlayer client (blueprint-client-eigenlayer) extends EVM capabilities with:
- AVS contract interactions
- BLS signature aggregation
- Operator registry management
- Delegation and staking operations
The Eigenlayer client uses the eigensdk for integration with the Eigenlayer protocol.
The Blueprint framework includes a unified keystore interface that supports multiple key types:
The `blueprint-keystore` crate provides cross-protocol key management:
- **Key types**: SR25519 (Tangle), ED25519, ECDSA (Tangle/EVM), BLS381, BLS377, BN254 (Eigenlayer)
- **Storage backends**: File Storage, In-Memory Storage
For Eigenlayer, multiple contract addresses are required:
```bash
ALLOCATION_MANAGER_ADDRESS=<0x...>
REGISTRY_COORDINATOR_ADDRESS=<0x...>
OPERATOR_STATE_RETRIEVER_ADDRESS=<0x...>
DELEGATION_MANAGER_ADDRESS=<0x...>
SERVICE_MANAGER_ADDRESS=<0x...>
STAKE_REGISTRY_ADDRESS=<0x...>
STRATEGY_MANAGER_ADDRESS=<0x...>
STRATEGY_ADDRESS=<0x...>
AVS_DIRECTORY_ADDRESS=<0x...>
REWARDS_COORDINATOR_ADDRESS=<0x...>
PERMISSION_CONTROLLER_ADDRESS=<0x...>
```
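With this many required addresses it is worth failing fast before starting a runner. A small illustrative check (the helper below is hypothetical, not part of the SDK):

```rust
// Returns the names from `required` that are absent or empty in `provided`.
fn missing_settings(required: &[&'static str], provided: &[(&str, &str)]) -> Vec<&'static str> {
    let mut missing = Vec::new();
    for name in required {
        let found = provided.iter().any(|&(k, v)| k == *name && !v.is_empty());
        if !found {
            missing.push(*name);
        }
    }
    missing
}
```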
The Blueprint framework provides testing utilities for each supported protocol to facilitate local development and testing.
The Tangle client (blueprint-client-tangle) provides:
- Blueprint deployment and management
- Service registration and requests
- Job submission
- Event monitoring via WebSocket subscriptions
The EVM client (blueprint-client-evm) provides:
- Contract deployment and interaction
- Transaction submission and monitoring
- Event filtering and subscription
- Block and transaction data retrieval
The EVM client uses the Alloy libraries for type-safe interactions with EVM chains.
Key management utilities are provided through the CLI:
```bash
cargo tangle key generate --key-type <KEY_TYPE>
cargo tangle key import --key-type <KEY_TYPE> --keystore-path <PATH> --protocol <PROTOCOL>
cargo tangle key list --keystore-path <PATH>
```
- Tangle: Primarily uses SR25519 keys for account operations and ECDSA for certain compatibility scenarios.
- EVM: Uses ECDSA (secp256k1) keys.
- Eigenlayer: Uses both ECDSA keys and BLS keys (BN254) for signature aggregation.
Each protocol requires specific configuration settings, which can be provided through settings files or environment variables.
Common settings:
- HTTP RPC URL
- WebSocket RPC URL
- Keystore URI
- Data Directory
- Blueprint ID
- Service ID (optional)

Eigenlayer additionally requires contract addresses, including:
- Registry Coordinator Address
- Operator State Retriever Address
- Delegation Manager Address
- Service Manager Address
- etc.
For Tangle, the following settings are required:
```bash
BLUEPRINT_ID=<Blueprint ID>
SERVICE_ID=<Service ID> # Optional
```
| Protocol | Crate | Features |
|---|---|---|
| Tangle | `blueprint-tangle-testing-utils` | Local testnet, blueprint deployment, job execution |
| EVM | `blueprint-anvil-testing-utils` | Anvil integration, contract deployment |
| Eigenlayer | `blueprint-eigenlayer-testing-utils` | AVS testing, BLS aggregation testing |
The CLI provides commands for working with different protocols through the `blueprint deploy` and `blueprint run` commands.

```bash
# Example command for deployment
cargo tangle blueprint deploy <options>
```

Tangle Deployment Options:
- HTTP RPC URL
- WebSocket RPC URL
- Package name
- Local devnet option
- Keystore path
Eigenlayer Deployment Options:
- RPC URL
- Contracts path
- Network (local, testnet, mainnet)
- Local devnet option
- Keystore path
The blueprint run command supports both Tangle and Eigenlayer protocols:
```bash
cargo tangle blueprint run --protocol <PROTOCOL> --rpc-url <URL> --settings-file <PATH>
```
When running with the Eigenlayer protocol, the command compiles and executes an AVS binary with appropriate configuration.
The Blueprint framework allows creating protocol-specific blueprints using templates:
```bash
cargo tangle blueprint create --name <NAME> [--blueprint-type <TYPE>]
```
Available blueprint types:
- `tangle`: Standard Tangle blueprint
- `eigenlayer-bls`: Eigenlayer AVS with BLS signature aggregation
- `eigenlayer-ecdsa`: Eigenlayer AVS with ECDSA verification
The Blueprint framework provides context extensions that enable blueprints to access protocol-specific functionality:
| Protocol | Context Extension | Features |
|---|---|---|
| Tangle | `TangleContextExtension` | Access to Tangle client and services |
| EVM | `EvmContextExtension` | Access to EVM clients and contracts |
| Eigenlayer | `EigenlayerContextExtension` | Access to Eigenlayer functionality |
The Blueprint framework's protocol support enables developers to build applications that work across multiple blockchain environments. By leveraging the modular architecture and protocol-specific extensions, applications can be built once and deployed to various supported protocols with minimal changes. For information on how to get started with a specific protocol, refer to the Getting Started section.
Developers can create new examples using the Blueprint CLI:
```bash
cargo tangle blueprint create --name my_example
```
The CLI supports different blueprint types through templates:
- Basic Tangle templates
- Eigenlayer BLS templates
- Eigenlayer ECDSA templates
These templates provide the necessary structure and boilerplate code for different protocols and use cases.
The "Incredible Squaring" blueprint demonstrates core concepts of the Blueprint SDK by implementing a service that squares numbers submitted through jobs.
The blueprint lives in a workspace whose `incredible-squaring-bin` binary depends on the Blueprint SDK crates `blueprint-router`, `blueprint-runner`, and `blueprint-core`.
Examples can be run using the Blueprint CLI, which provides commands for building, deploying, and running blueprints.
```bash
# Example CLI commands for deployment
blueprint-cli build
blueprint-cli deploy
blueprint-cli run
```

These examples provide working code that developers can run, modify, and use as templates for their own projects.
The workspace structure separates the core business logic (the library) from the application entry point (the binary), promoting code reuse and testability.
The following diagram illustrates how jobs flow through the Incredible Squaring blueprint:
1. The client submits a number to square.
2. The Tangle Producer creates a `JobCall` carrying the number.
3. The Job Router routes the call to the squaring job based on its ID.
4. The Blueprint Runner executes the squaring function.
5. The squared result flows back as a `JobResult` through the Tangle Consumer to the client.
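Stripped of the SDK types, the squaring job's core logic is just parse, square, serialize. A self-contained sketch (modeling the job body as plain bytes is an assumption for illustration):

```rust
// Parses the job body as a decimal number, squares it, and returns the result
// as bytes, mirroring the JobCall body -> JobResult body round trip.
fn square_job(body: &[u8]) -> Result<Vec<u8>, String> {
    let text = std::str::from_utf8(body).map_err(|e| e.to_string())?;
    let n: u64 = text.trim().parse().map_err(|_| format!("not a number: {text}"))?;
    Ok((n * n).to_string().into_bytes())
}
```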
The Blueprint SDK supports multiple blockchain protocols, with examples demonstrating how to register as a service provider, handle protocol-specific data, and integrate with different blockchain environments.
The example blueprints demonstrate integration with various protocols through a common runner architecture:
- `BlueprintRunnerBuilder` configures a runner and produces a `FinalizedBlueprintRunner`.
- Protocol configurations (`TangleConfig`, `EigenlayerBLSConfig`, `EigenlayerECDSAConfig`) each implement the `BlueprintConfig` trait.
- Example implementations such as Incredible Squaring configure the runner with one of these protocol configurations.
The Blueprint SDK includes examples for the Tangle Network, demonstrating how to:
- Register as an operator
- Handle service requests
- Process jobs on the Tangle Network
For Eigenlayer, the examples demonstrate:
- BLS-based operator registration and service execution
- ECDSA-based operator registration and service execution
- Integration with Eigenlayer contracts and middleware
1. Create Blueprint: `cargo tangle blueprint create`
2. Build Blueprint: `cargo build`
3. Deploy Blueprint: `cargo tangle blueprint deploy`
4. Run Blueprint: `cargo tangle blueprint run`
Example blueprints demonstrate secure key management through the Blueprint SDK's keystore system.
- Protocol Registration: The deployment process sets up the necessary protocol-specific configuration and registers the blueprint with the selected network.
To generate keys, use the following command:
```bash
cargo tangle blueprint generate-keys
```

Keys can be imported from existing sources.
- SR25519 Keys: For Tangle
- ECDSA Keys: For Ethereum/Eigenlayer
- BLS/BN254 Keys: For BLS Signatures
- Transaction Signing: Essential for verifying transactions.
- P2P Networking: Important for secure communication between nodes.
The example blueprints are designed to be extensible. Developers can:
- Modify the job handling logic.
- Add custom background services.
- Implement additional protocol integrations.
- Create custom producers and consumers.
Example Extension: To extend the Incredible Squaring example with a cubing function:
- Add a new job handler function.
- Register it with a new job ID in the router.
- Deploy the updated blueprint.
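Sketched with a toy dispatch table (the job IDs and the registration call are simplified stand-ins for the real Router API), the first two steps look like:

```rust
use std::collections::HashMap;

fn square_handler(n: u64) -> u64 {
    n * n
}

// Step 1: the new job handler.
fn cube_handler(n: u64) -> u64 {
    n * n * n
}

// Step 2: register it under a fresh job ID alongside the existing route.
fn build_routes() -> HashMap<u64, fn(u64) -> u64> {
    let mut routes: HashMap<u64, fn(u64) -> u64> = HashMap::new();
    routes.insert(0, square_handler); // existing squaring job
    routes.insert(1, cube_handler);   // new cubing job
    routes
}
```

Step 3, deploying the updated blueprint, follows the normal deploy flow unchanged.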
The following table summarizes the key examples available in the Blueprint SDK:
| Example Name | Protocol | Main Functionality | Key Features |
|---|---|---|---|
| Incredible Squaring | Tangle | Number squaring | Job routing, basic producer/consumer |
| Incredible Squaring | Eigenlayer (BLS) | Number squaring | BLS signatures, Eigenlayer registration |
| Incredible Squaring | Eigenlayer (ECDSA) | Number squaring | ECDSA signatures, Eigenlayer registration |
The Core Components provide the foundation of the Blueprint SDK, defining the fundamental abstractions and types that power the job-based processing model. This model allows for a unified way to handle requests from various sources, including blockchain events, timers, or network messages.
The Core Components offer a unified programming model for handling events consistently across different blockchain environments. By abstracting event handling into Jobs, JobCalls, and JobResults, Blueprint enables developers to write protocol-agnostic code deployable across various blockchain networks. Understanding these components is essential for effectively using the Blueprint SDK to build decentralized applications.
In the job handling flow, a Producer (event source) generates a `JobCall` carrying a `JobId`; the Router matches the `JobId` and routes the call to a Job Handler; Extractors pull out the data the handler function needs; the handler returns a value that is converted via `IntoJobResult` into a `JobResult`, which is then consumed by a Consumer.
Code examples:

```rust
// Job that extracts data from the call
async fn data_job(body: Bytes) -> String {
    format!("Received: {}", String::from_utf8_lossy(&body))
}

// Job that uses context
async fn context_job(Context(ctx): Context<AppContext>) -> String {
    format!("Using context: {}", ctx.name)
}
```
`JobId` is a unique identifier used to route job calls to the appropriate handler. It is a 256-bit (32-byte) identifier that can be created from various types, including `u8`, `u32`, `String`, and `&str`, via the standard `from()`/`into()` conversions.
The core components are built around several key abstractions:
| Abstraction | Description |
|---|---|
| `Job` | Trait for async functions that handle job requests |
| `JobId` | Unique identifier for routing job calls |
| `JobCall` | Representation of a job call event with metadata |
| `JobResult` | Result of job execution |
| `IntoJobResult` | Trait for converting values to `JobResult` |
| Extractors | Types that obtain data from job calls |
The Job trait is the cornerstone of the Blueprint processing model. It represents an async function that can handle job calls and produce results. The Job trait is automatically implemented for async functions with appropriate signatures, allowing them to be used directly as handlers:
```rust
// Simple job with no arguments
async fn simple_job() -> String {
    "Hello, world!".to_string()
}
```

`JobId` can be created from:
- Primitive integers (u8, u16, u32, u64, etc.)
- Strings (automatically hashed)
- Byte arrays
- Other types with specific implementations
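As an illustration of those conversions, here is a toy 256-bit identifier (the integer layout and the string hash below are assumptions for illustration, not the real `JobId` encoding):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct ToyJobId([u8; 32]);

impl From<u32> for ToyJobId {
    // Integers: zero-extended into the trailing bytes, big-endian.
    fn from(n: u32) -> Self {
        let mut bytes = [0u8; 32];
        bytes[28..].copy_from_slice(&n.to_be_bytes());
        ToyJobId(bytes)
    }
}

impl From<&str> for ToyJobId {
    // Strings: hashed, with the digest packed into the identifier.
    fn from(s: &str) -> Self {
        let mut hasher = DefaultHasher::new();
        s.hash(&mut hasher);
        let mut bytes = [0u8; 32];
        bytes[..8].copy_from_slice(&hasher.finish().to_be_bytes());
        ToyJobId(bytes)
    }
}
```

The same input always yields the same identifier, which is what makes ID-based routing stable across calls.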
Example usage:
```rust
// From numeric literals
const MY_JOB_ID: u32 = 1;
let job_id = JobId::from(MY_JOB_ID);

// From strings (hashed internally)
let str_job_id = JobId::from("my-job-name");
```

`JobCall` represents a job call event that needs to be processed. It contains two main components:
- A header (`Parts`) with job metadata
- A body that holds the actual job data

The `Parts` structure contains:

- `job_id`: the identifier used for routing
- `metadata`: key-value pairs for job-specific metadata
- `extensions`: storage for request-specific data (similar to middleware)
A special Void type is provided for jobs that intentionally don't produce a result:
```rust
async fn no_result_job() -> Void {
    // Do something but don't return a result
}
```

Extractors are types that extract data from `JobCall`s, making it easy to access specific parts of the request in job handlers. The framework provides two main extractor traits:
- `FromJobCall<Ctx, M>`: declares an associated `Rejection` type and a `from_job_call(JobCall, &Ctx) -> Result` method; it extracts from the full call, including the body.
- `FromJobCallParts<Ctx>`: declares an associated `Rejection` type and a `from_job_call_parts(Parts, &Ctx) -> Result` method; it extracts from the call metadata (`Parts`) only.

Built-in extractors include `Context<S>`, `Metadata` (a `MetadataMap`), and `Bytes`.
Example creating a job call:
```rust
// Create a job call with an integer ID and string body
let call = JobCall::new(123, "job data");

// Create a job call with a string ID (hashed internally)
let call = JobCall::new("my-job", Bytes::from("job data"));
```

`JobResult` represents the outcome of job execution. It can be either:

- `Ok`, containing a successful result with a body and metadata
- `Err`, containing an error
The IntoJobResult trait allows various types to be automatically converted to JobResult, making it convenient to return different types from job handlers:
```rust
// String is converted to JobResult
async fn string_job() -> String {
    "Hello, world!".to_string()
}

// Result is converted to JobResult
async fn fallible_job() -> Result<String, MyError> {
    Ok("Success".to_string()) // or Err(MyError::Failure)
}
```

Any error type that implements `std::error::Error + Send + Sync + 'static` can be returned from a job handler:
```rust
// Custom error type
#[derive(Debug)]
enum MyError {
    InvalidInput,
    ProcessingFailed,
}

impl std::fmt::Display for MyError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            MyError::InvalidInput => write!(f, "invalid input"),
            MyError::ProcessingFailed => write!(f, "processing failed"),
        }
    }
}

impl std::error::Error for MyError {}

// Job that can return an error
async fn fallible_job() -> Result<String, MyError> {
    if true {
        Ok("Success".to_string())
    } else {
        Err(MyError::InvalidInput)
    }
}
```

The `IntoJobResult` trait implementation for `Result` handles the conversion to `JobResult::Ok` or `JobResult::Err` automatically.
The core components integrate with other parts of the Blueprint system:
## Protocol Integrations
### Core Components of Protocol Integrations
- **Router**: Routes `JobCall`s to the appropriate job handlers.
- **Runner**: Executes and manages jobs.
- **Clients**:
- Generate `JobCall`s from blockchain events.
- Process `JobResult`s.
- **Networking**: Transfers `JobCall`s and `JobResult`s between nodes.
- **Keystore**: Provides cryptographic services for job processing.
- `Context<T>`: provides access to the application context
- `Metadata`: extracts metadata from the job call
- `Bytes`: extracts the raw body bytes

```rust
async fn context_handler(Context(ctx): Context<AppState>) { /* use ctx */ }
async fn metadata_handler(Metadata(metadata): Metadata) { /* access metadata */ }
async fn bytes_handler(body: Bytes) { /* process raw bytes */ }
```
Blueprint's core components provide flexible error handling through the Result type and automatic conversion to JobResult:
The Job Handler returns a `Result`, which is converted via `IntoJobResult` into `JobResult::Ok` or `JobResult::Err`.
To start using the Blueprint framework, you first need to install the cargo-tangle CLI tool:

```bash
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/tangle-network/gadget/releases/download/cargo-tangle/v0.1.1-beta.7/cargo-tangle-installer.sh | sh
```

or install from source:

```bash
cargo install cargo-tangle --git https://github.com/tangle-network/gadget --force
```

Once installed, you can create your first blueprint:

```bash
# Create a new blueprint named "my_blueprint"
cargo tangle blueprint create --name my_blueprint

# Navigate into the blueprint directory and build
cd my_blueprint
cargo build

# Deploy your blueprint to the Tangle Network
cargo tangle blueprint deploy --rpc-url wss://rpc.tangle.tools --package my_blueprint
```

For more detailed information on how to get started, see Getting Started.
Blueprint's architecture is built around a modular, extensible core system with specialized components for different blockchain environments.
- CLI (`cargo-tangle`): Command-line interface for creating, managing, and deploying blueprints.
- Blueprint SDK: Core toolkit with various components for different functionalities.
- Blueprint Runner: Executes blueprint operations in a protocol-specific manner.
- Blueprint Manager: Orchestrates blueprint lifecycle, handling events and sources.
- Protocol Integrations: Specialized clients for interacting with different blockchain networks.
For more detailed information about specific aspects of the framework, see the related documentation pages on:
- Overview
- Framework Architecture
- SDK Components
- Job and Router System
- Blueprint Manager and Runner
- Protocol Support
- Networking and Cryptography
- Keystore
- Getting Started
- Core: Fundamental abstractions for the job system and blueprint building blocks
- Router: Directs job requests to appropriate handlers
- Crypto: Multi-chain cryptographic primitives for various key types and signature schemes
- Clients: Protocol-specific clients for blockchain interactions
- Networking: P2P networking capabilities based on libp2p
- Keystore: Secure key management for multiple key types
At the heart of the Blueprint framework is the job system, which provides a unified way to handle various tasks across different protocols.
In the job execution flow, a `JobCall` with a `JobId` enters the Router. On a route match it is dispatched to the matching Job Handler; if no route matches, it goes to the Fallback Handler; Always routes run their Always Handler for every call. Each path produces a `JobResult`.
- Blueprint Manager:
- Monitors events from blockchain networks
- Manages the lifecycle of blueprint services
- Fetches and spawns blueprints based on events
- Blueprint Runner:
- Configures the execution environment
- Coordinates job execution
- Provides protocol-specific runtime capabilities
The Blueprint framework supports multiple blockchain protocols through specialized client implementations and protocol-specific extensions.
Deployment through the `cargo-tangle` CLI targets each protocol via the `blueprint-clients` crates (`blueprint-client-tangle`, `blueprint-client-evm`, `blueprint-client-eigenlayer`) and the protocol extensions (`blueprint-tangle-extra`, `blueprint-evm-extra`, `blueprint-eigenlayer-extra`).
The Blueprint framework provides a comprehensive set of tools for building decentralized applications across multiple blockchain networks. Key advantages include:
- Protocol Agnostic: Write once, deploy to multiple blockchain environments
- Modular Design: Use only the components you need
- Extensible: Add custom functionality through middleware and extensions
- Secure: Strong cryptographic foundations and key management
- Networked: Built-in P2P networking capabilities
The Blueprint SDK is highly modular, consisting of multiple crates that can be used independently:
Blueprint SDK crates:

- blueprint-core
- blueprint-router
- blueprint-runner
- blueprint-crypto (crypto-core, crypto-k256, crypto-sr25519, crypto-ed25519, crypto-bls, crypto-bn254)
- blueprint-keystore
- blueprint-clients (client-core, client-tangle, client-evm, client-eigenlayer)
- blueprint-networking
- blueprint-stores
The job system consists of:
- JobCall: A request to execute a specific job with associated data
- JobId: A unique identifier for jobs
- Router: Examines incoming JobCalls and directs them to appropriate handlers
- Handlers: Functions that execute the logic for specific jobs
- JobResult: The result returned after job execution
This design allows for flexible extensibility and composition of behaviors through middleware and routing configurations.
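The dispatch described above can be sketched in plain, synchronous Rust. All types here (`JobCall`, `JobResult`, `Router`, `Handler`) are toy stand-ins for illustration, not the SDK's real generic, async definitions:

```rust
use std::collections::HashMap;

// Toy stand-ins for the SDK's JobCall and JobResult (assumed, simplified shapes).
struct JobCall {
    job_id: u64,
    body: Vec<u8>,
}
type JobResult = Result<Vec<u8>, String>;
type Handler = fn(JobCall) -> JobResult;

struct Router {
    routes: HashMap<u64, Handler>,
}

impl Router {
    fn new() -> Self {
        Router { routes: HashMap::new() }
    }
    // Register a handler for a specific job ID (builder style).
    fn route(mut self, job_id: u64, handler: Handler) -> Self {
        self.routes.insert(job_id, handler);
        self
    }
    // Dispatch: look up the handler by JobId; None when nothing matches.
    fn dispatch(&self, call: JobCall) -> Option<JobResult> {
        self.routes.get(&call.job_id).map(|h| h(call))
    }
}

// A handler that echoes the call body back as the result.
fn echo(call: JobCall) -> JobResult {
    Ok(call.body)
}

fn main() {
    let router = Router::new().route(1, echo);
    let result = router.dispatch(JobCall { job_id: 1, body: b"hi".to_vec() });
    assert_eq!(result, Some(Ok(b"hi".to_vec())));
    // Unknown job IDs produce no result in this sketch.
    assert!(router.dispatch(JobCall { job_id: 9, body: vec![] }).is_none());
}
```

The real router also supports fallback and always routes; this sketch keeps only the exact-match path to show the JobCall → Router → Handler → JobResult shape.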
The Blueprint Manager and Runner work together to handle the lifecycle of blueprints from deployment to execution.
Event Flow (diagram): blockchain events reach the Blueprint Manager's event handler. The Blueprint Source Handler fetches blueprints from a GitHub, container, or test source; the manager configures a Blueprint Runner through its builder and spawns the finalized runner, which executes jobs.
The Blueprint framework is a toolkit for building, deploying, and managing decentralized applications (dApps) across multiple blockchain environments. It supports multiple blockchain protocols, including Tangle Network, Eigenlayer, and EVM-compatible chains, enabling seamless application operation across these environments. For detailed protocol integrations, see Protocol Support.
Each protocol integration includes:
- Client Library: For interacting with the blockchain
- Protocol Extensions: Additional functionality specific to that protocol
- Configuration: Protocol-specific settings and options
- Deployment Tools: CLI commands for deploying to that environment
- Tangle Network: Native support for the Tangle blockchain
- EVM-compatible chains: Support for Ethereum and other EVM chains
- Eigenlayer: Support for Eigenlayer's consensus mechanisms
- Keystore
- Local Signer
- Remote Signer
- Hardware Signer
- Crypto Modules:
- K256 (secp256k1)
- SR25519
- ED25519
- BLS
- BN254
The networking layer is built on libp2p and provides:
- Peer-to-peer communication: Direct communication between nodes
- Discovery: Finding and connecting to peers
- Extensions: Support for specialized protocols like aggregated signatures and round-based protocols
- Network Service
- Gadget Behaviour
- Peer Manager
The cryptography modules support multiple signature schemes and key types:
- K256 (secp256k1): Used for Ethereum and other EVM chains
- SR25519: Used for Tangle and other Substrate-based blockchains
- ED25519: General-purpose EdDSA implementation
- BLS: Boneh-Lynn-Shacham signatures for aggregated signing
- BN254: Barreto-Naehrig curves for zero-knowledge proofs
The keystore system provides:
- Secure key storage: Safe management of cryptographic keys
- Multiple backends: File-based, in-memory, and remote options
- Signature generation: Protocol-specific signing capabilities
- Overview: Overview
- Architecture Overview: Architecture Overview
- Key Concepts: Key Concepts
- Protocol Support: Protocol Support
- Installation: Installation
- Creating Your First Blueprint: Creating Your First Blueprint
- Example Blueprints: Example Blueprints
- Blueprint SDK: Blueprint SDK
- Core Components: Core Components
- Job System: Job System
- Router: Router
- Networking: Networking
- Keystore: Keystore
- Blueprint Runner: Blueprint Runner
- Runner Configuration: Runner Configuration
- Job Execution Flow: Job Execution Flow
- Blueprint Manager: Blueprint Manager
- Event Handling: Event Handling
- Blueprint Sources: Blueprint Sources
- CLI Reference: CLI Reference
- Blueprint Commands: Blueprint Commands
- Key Management Commands: Key Management Commands
- Deployment Options: Deployment Options
- Development: Development
- Build Environment: Build Environment
- Testing Framework: Testing Framework
- CI/CD: CI/CD
- Advanced Topics: Advanced Topics
- Networking Extensions: Networking Extensions
- Macro System: Macro System
- Custom Protocol Integration: Custom Protocol Integration
Blueprints are Infrastructure-as-Code templates that allow developers to build crypto services quickly. They provide the structure and behavior for applications running on various blockchain protocols.
Relationships (diagram): a blueprint deploys to EVM blockchains, Tangle Network, or Eigenlayer. It creates a Service Instance, which executes Jobs that are routed by the Router.
Relevant source files:
- README.md
- CLI README.md
- Create Command
- Keys Command
- Eigenlayer Command
- Context Extraction
- Metadata Extraction
- Job Call
- Job ID
- Job Module
- Job Result Into
- Job Result Module
- Core Library
- EVM Extra Utilities
- Runner Cargo.toml
- Runner Config
- Eigenlayer BLS
- Eigenlayer Config
- Eigenlayer ECDSA
- Eigenlayer Error
- Runner Error
- Runner Library
- Tangle Config
- Tangle Error
- Tangle Extra Cargo.toml
- Incredible Squaring Cargo.lock
- Incredible Squaring Cargo.toml
A blueprint defines the behavior of a service that processes jobs through a router, deployable to various blockchain environments like Tangle Network, Eigenlayer, and EVM-compatible chains.
The job system is the core mechanism for handling work within a blueprint. Jobs are async functions that process requests and produce results.
Job Components (diagram): a JobCall contains Parts (header) and a Body (arguments); the Parts consist of the JobId, Metadata, and Extensions. A JobCall is routed by the Router, which executes a Job Handler that produces a JobResult.
The Router inspects the JobId of incoming JobCall objects and routes them to the appropriate handler. It supports:
- Exact matches: routes a job call to a specific handler based on its `JobId`
- Fallback routes: handles job calls when no specific handler exists for a `JobId`
- Always routes: executes for every job call, regardless of `JobId`
- Middleware: applies transformations or validations to job calls before they reach handlers
The Blueprint Runner orchestrates the execution of jobs and manages the lifecycle of a blueprint service.
A JobCall is a request to execute a specific job. It consists of:
- Header (Parts): contains the `JobId` and metadata about the call.
- Body: contains the job arguments in a format specific to the producer.
A JobId is a unique identifier for each job registered with the system. It can be created from various primitive types like integers, strings, or byte arrays.
Job handlers are async functions that implement the Job trait. They process JobCall requests and return JobResult objects.
// Simple job handler example
async fn job_handler() -> String {
    "Hello, World!".to_string()
}

A JobResult represents the outcome of a job execution. It can be either a success (Ok) with a body and metadata, or an error (Err).
The Router is responsible for directing job calls to the appropriate handlers based on the JobId.
Router Configuration (diagram): `router.route(job_id, job)` registers a specific route, `router.fallback(job)` a fallback route, `router.always(job)` an always route, and `router.layer(middleware)` wraps routes in middleware. An incoming JobCall (with JobId) dispatches to the matching job handler, to the fallback handler when no route matches, and to always handlers on every call.
The Blueprint Runner:
- Receives job calls from producers
- Routes them to appropriate handlers using the Router
- Collects the results and sends them to consumers
- Manages background services that run alongside the job processing flow
- Producers: generate `JobCall` objects that trigger job execution
- Consumers: receive `JobResult` objects produced by completed jobs
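A minimal synchronous sketch of that loop, using std channels to stand in for producer streams and consumer sinks. The real runner is async and configured through a builder; `run_once`, the tuple payloads, and the inline "router" match are all assumptions for illustration:

```rust
use std::sync::mpsc;

// Hypothetical, simplified runner loop: consume job calls from a producer,
// route them by job_id, and forward every result to the consumer.
fn run_once(
    producer: mpsc::Receiver<(u64, String)>,        // (job_id, payload)
    consumer: mpsc::Sender<Result<String, String>>, // job results
) {
    for (job_id, payload) in producer {
        // "Router": match the job_id to a handler.
        let result = match job_id {
            1 => Ok(payload.to_uppercase()),
            _ => Err(format!("no handler for job {job_id}")),
        };
        // Forward the result to the consumer.
        consumer.send(result).expect("consumer dropped");
    }
}

fn main() {
    let (job_tx, job_rx) = mpsc::channel();
    let (res_tx, res_rx) = mpsc::channel();
    job_tx.send((1, "hello".to_string())).unwrap();
    job_tx.send((2, "x".to_string())).unwrap();
    drop(job_tx); // end of the producer stream
    run_once(job_rx, res_tx);
    assert_eq!(res_rx.recv().unwrap(), Ok("HELLO".to_string()));
    assert!(res_rx.recv().unwrap().is_err());
}
```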
Background services run alongside the job processing pipeline and perform ongoing tasks. They implement the BackgroundService trait.
The Blueprint framework supports multiple blockchain protocols through specialized protocol settings and configurations.
- Tangle Network: Substrate-based protocol supporting WebAssembly smart contracts.
- Eigenlayer: EVM-based protocol with specialized cryptographic requirements (BLS, ECDSA).
- EVM: General Ethereum Virtual Machine support.
- BlueprintConfig Trait
- TangleConfig
- EigenlayerBLSConfig
- EigenlayerECDSAConfig
- ProtocolSettings
- TangleProtocolSettings
- EigenlayerProtocolSettings
- BlueprintEnvironment
For each protocol, there are specialized settings and configurations that determine how blueprints interact with the blockchain.
The BlueprintEnvironment describes the context in which a blueprint runs, including:
- RPC Endpoints
- Keystore Information
- Protocol-Specific Settings
BlueprintEnvironment
├── http_rpc_endpoint
├── ws_rpc_endpoint
├── keystore_uri
├── data_dir
└── protocol_settings
├── networking settings
├── ProtocolSettings
├── TangleProtocolSettings
└── EigenlayerProtocolSettings
The environment provides all necessary information for a blueprint to connect to blockchain networks, handle cryptographic operations, and access required resources.
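As a rough sketch, the environment tree above might map to a struct like the following. Field names follow the tree, but the types and the `ProtocolSettings` variants shown here are illustrative assumptions, not the SDK's exact definitions:

```rust
use std::path::PathBuf;

// Sketch of the BlueprintEnvironment fields described above (assumed types).
#[derive(Debug, Clone)]
struct BlueprintEnvironment {
    http_rpc_endpoint: String,
    ws_rpc_endpoint: String,
    keystore_uri: String,
    data_dir: PathBuf,
    protocol_settings: ProtocolSettings,
}

// Hypothetical protocol settings; the real enum carries protocol-specific data.
#[derive(Debug, Clone)]
enum ProtocolSettings {
    None,
    Tangle { blueprint_id: u64 },
    Eigenlayer { service_manager: String },
}

fn main() {
    let env = BlueprintEnvironment {
        http_rpc_endpoint: "https://rpc.tangle.tools".into(),
        ws_rpc_endpoint: "wss://rpc.tangle.tools".into(),
        keystore_uri: "file://./keystore".into(),
        data_dir: "./data".into(),
        protocol_settings: ProtocolSettings::Tangle { blueprint_id: 0 },
    };
    // The settings variant selects the protocol the blueprint runs against.
    assert!(matches!(env.protocol_settings, ProtocolSettings::Tangle { .. }));
}
```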
The Keystore manages cryptographic keys used by blueprints for various operations like signing transactions and verifying messages.
Keystore System (diagram): the Keystore supports multiple key types (SR25519, ED25519, K256 ECDSA, BLS) and uses pluggable storage backends (file storage, in-memory storage).
Different protocols require different key types:
- Tangle Network: SR25519, ECDSA
- Eigenlayer: ECDSA (secp256k1), BLS (BN254)
- EVM: ECDSA (secp256k1)
Extractors are components that extract data from job calls to be used as arguments in job handlers.
- JobCall
- FromJobCall trait
- FromJobCallParts trait
- Body Type Extractors
- Metadata Extractors
- Context Extractors
- Context: Access to the global context of the blueprint
- Metadata: Access to metadata attached to a job call
- Body: Access to the job call's body in various formats
When implementing a blueprint, developers typically:
- Define job handlers for specific tasks
- Configure a router to map job IDs to handlers
- Set up producers to generate job calls
- Configure consumers to process job results
- Implement protocol-specific registration if needed
Runtime Execution (diagram): Producers generate Job Calls, which are processed by the Router and dispatched to Job Handlers; the handlers produce Job Results, which are sent to Consumers.

Blueprint Implementation Flow (diagram): define job handlers → configure router → set up producers → set up consumers → register with protocol → run blueprint.
The Blueprint CLI (cargo-tangle) provides commands for creating, building, deploying, and managing blueprints on supported protocols.
Blueprints need to be registered with their target protocol before they can be used. The registration process varies by protocol:
- Tangle Network: Register as an operator for a specific blueprint ID
- Eigenlayer BLS: Register with BLS cryptographic verification
- Eigenlayer ECDSA: Register with ECDSA cryptographic verification
The Blueprint framework provides comprehensive error handling through a hierarchical error system:
Error Hierarchy
- RunnerError
- Keystore Error
- Networking Error
- Config Error
- JobCall Error
- Producer Error
- Consumer Error
- Protocol-Specific Errors
- TangleError
- EigenlayerError
This error system helps developers identify and handle issues at various levels of the blueprint execution flow.
- Ensure you have the necessary dependencies installed.
- Follow the commands specific to your platform for installation.

git clone https://github.com/tangle-network/blueprint.git
cd blueprint
cargo build --release

- Set up your development environment according to the guidelines.
- Follow the instructions to create your first blueprint.
- Run the test suite with cargo test.
- Deploy your blueprint with cargo run --release -- deploy.
- Generate keys with blueprint keys generate.
- Set the required environment variables as specified in the documentation.
- Use the CLI commands to interact with your deployed blueprints.
- Set up a Nix-based development environment if desired.
- Explore advanced topics and additional features in the documentation.
- Rust Toolchain (1.86+)
- OpenSSL Development Packages
- Build Essentials
- pkg-config
- Foundry (for EVM development)
- Protocol Buffers (for networking extensions)
- Node.js (for contract testing)
For Ubuntu/Debian:

sudo apt update && sudo apt install build-essential cmake libssl-dev pkg-config

For macOS:

brew install openssl cmake

For Rust (all platforms):

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

The Blueprint CLI (cargo-tangle) can be installed in either of two ways:

curl --proto '=https' --tlsv1.2 -LsSf https://github.com/tangle-network/gadget/releases/download/cargo-tangle/v0.1.1-beta.7/cargo-tangle-installer.sh | sh

or from source:

cargo install cargo-tangle --git https://github.com/tangle-network/gadget --force

Verify the installation:

cargo tangle --version
You can configure certain aspects of the Blueprint environment using environment variables:
| Variable | Description | Example |
|---|---|---|
| `SIGNER` | SURI of the Substrate signer account | `export SIGNER="//Alice"` |
| `EVM_SIGNER` | Private key of the EVM signer account | `export EVM_SIGNER="0xcb6df9..."` |
These environment variables can be used instead of a keystore for deployment operations.
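For illustration, a process can read these variables with the standard library. The `signer_config` helper is hypothetical; the CLI performs its own parsing and validation:

```rust
use std::env;

// Read the signer variables from the table above, if set.
fn signer_config() -> (Option<String>, Option<String>) {
    (env::var("SIGNER").ok(), env::var("EVM_SIGNER").ok())
}

fn main() {
    env::set_var("SIGNER", "//Alice");
    env::set_var("EVM_SIGNER", "0xcb6df9");
    let (substrate, evm) = signer_config();
    assert_eq!(substrate.as_deref(), Some("//Alice"));
    assert_eq!(evm.as_deref(), Some("0xcb6df9"));
}
```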
The Nix environment includes all required dependencies including Rust toolchain, OpenSSL, and Foundry.
- Learn about the detailed installation options.
- Follow the step-by-step guide to creating your first blueprint.
- Explore the example blueprints.
- Dive deeper into the Blueprint SDK components.
`cargo tangle`, `blueprint create`, `cargo build`, `cargo test`, `blueprint deploy`, `blueprint run`, `blueprint list-blueprints`, `blueprint request-service`
After installing the CLI, you can create a new blueprint project:
# Create a new blueprint named "my_blueprint"
cargo tangle blueprint create --name my_blueprint
# Navigate into the blueprint directory
cd my_blueprint

The blueprint create command generates a new project from a template repository. By default, it creates a Tangle Network blueprint, but you can specify other blueprint types using the --type flag.
The command uses cargo-generate with pre-defined templates based on your chosen blueprint type:
| Blueprint Type | Template Repository |
|---|---|
| Tangle (default) | https://github.com/tangle-network/blueprint-template |
| Eigenlayer BLS | https://github.com/tangle-network/eigenlayer-bls-template |
| Eigenlayer ECDSA | https://github.com/tangle-network/eigenlayer-ecdsa-template |
Once your blueprint is created, you can build and test it:
# Build the blueprint
cargo build
# Run the tests
cargo test

Blueprint provides a robust key management system through its keystore functionality. The keystore securely manages various types of cryptographic keys used in different protocols.
Keystore
+new(config: KeystoreConfig) : Result<Keystore>
+generate<T: KeyType>(seed: Option<&[u8]>) : Result<T::Public>
+insert<T: KeyType>(secret: &T::Secret) : Result<()>
+sign_with_local<T: KeyType>(public: &T::Public, msg: &[u8]) : Result<T::Signature>
+list_local<T: KeyType>() : Result<Vec<T::Public>>
+get_secret<T: KeyType>(public: &T::Public) : Result<T::Secret>
You can generate keys using the CLI:
cargo tangle blueprint generate-keys -k <KEY_TYPE> -p <PATH> -s <SEED> --show-secret

Where:

- KEY_TYPE: the key type to generate (sr25519, ecdsa, bls_bn254, ed25519, bls381)
- PATH: path to store the generated keypair (optional)
- SEED: seed to use for generation (optional)
- --show-secret: display the private key (optional)
The build process compiles your Rust code and any smart contracts included in your project. If the project includes Foundry-based contracts, they will be compiled as part of the build process.
After building your blueprint, you can deploy it to a blockchain network:
# Deploy to the Tangle Network
cargo tangle blueprint deploy --rpc-url wss://rpc.tangle.tools --package my_blueprint
# Or deploy to a local testnet for development
cargo tangle blueprint deploy tangle --devnet --package my_blueprint

The deployment process includes:
- Connecting to the specified network
- Using a keystore to sign transactions
- Deploying the blueprint to the network
- Returning information about the deployed blueprint
After deployment, you can interact with your blueprints using various CLI commands:
| Command | Description |
|---|---|
| `blueprint list-blueprints` | List available blueprints |
| `blueprint list-requests` | List service requests |
| `blueprint register` | Register as a provider |
| `blueprint request-service` | Request a service |
| `blueprint accept` | Accept a service request |
| `blueprint reject` | Reject a service request |
| `blueprint run` | Run a blueprint |
| `blueprint submit` | Submit a job to a blueprint |
For reproducible development environments, Blueprint provides a Nix flake configuration:
nix develop
The Job System is built around several key abstractions that work together to process and execute tasks:
JobId is a unique identifier for each job, allowing the router to direct job calls to the appropriate handler.
A JobResult represents the outcome of a job execution, which can be either a success (Ok) with a body and metadata, or an error (Err).
The job system processes jobs through a series of steps from receipt to completion:
Producer -> BlueprintRunner -> Router -> Job Handler -> Consumer
Jobs can be implemented as simple async functions:
// A job that immediately returns a string
async fn hello_job() -> String {
"Hello, World!".to_string()
}
// A job that processes input data
async fn echo_job(body: Bytes) -> Result<String, String> {
String::from_utf8(body.to_vec()).map_err(|_| "Invalid UTF-8".to_string())
}
// A job that uses context
async fn context_job(Context(ctx): Context<AppState>) -> String {
format!("Using context: {}", ctx.value)
}

The job system provides extractors to obtain data from job calls, making it easier to work with job inputs. The IntoJobResult trait provides automatic conversion for many common types:

- `String`, `&str` - converted to `Bytes`
- `Bytes`, `BytesMut` - used directly
- `()` (unit) - converted to empty `Bytes`
- `Result<T, E>` - success converted via `IntoJobResult`, error converted to `BoxError`
- `Void` - produces no result (returns `None`)
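A toy version of those conversions, assuming a simplified `Bytes` alias and a string error type (the real `IntoJobResult` returns richer body and metadata types):

```rust
// Simplified stand-ins: Bytes as Vec<u8>, errors as String.
type Bytes = Vec<u8>;

trait IntoJobResult {
    fn into_job_result(self) -> Option<Result<Bytes, String>>;
}

// String -> Bytes.
impl IntoJobResult for String {
    fn into_job_result(self) -> Option<Result<Bytes, String>> {
        Some(Ok(self.into_bytes()))
    }
}
// Unit -> empty Bytes.
impl IntoJobResult for () {
    fn into_job_result(self) -> Option<Result<Bytes, String>> {
        Some(Ok(Bytes::new()))
    }
}
// Void-like marker: produces no result at all.
struct Void;
impl IntoJobResult for Void {
    fn into_job_result(self) -> Option<Result<Bytes, String>> {
        None
    }
}
// Result: success converted recursively, error passed through.
impl<T: IntoJobResult> IntoJobResult for Result<T, String> {
    fn into_job_result(self) -> Option<Result<Bytes, String>> {
        match self {
            Ok(v) => v.into_job_result(),
            Err(e) => Some(Err(e)),
        }
    }
}

fn main() {
    assert_eq!("hi".to_string().into_job_result(), Some(Ok(b"hi".to_vec())));
    assert_eq!(().into_job_result(), Some(Ok(vec![])));
    assert!(Void.into_job_result().is_none());
    let err: Result<String, String> = Err("boom".into());
    assert_eq!(err.into_job_result(), Some(Err("boom".to_string())));
}
```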
The Job System is a core part of the Blueprint Runner, orchestrating the entire execution flow.
The job system supports different protocols through specific configurations and registration mechanisms.
Protocol Support (diagram): each protocol config (`TangleConfig`, `EigenlayerConfig`) implements the `BlueprintConfig` trait, whose `register()` method carries the registration logic.
Each protocol implements the BlueprintConfig trait, which provides:
- Registration logic for the protocol
- Checking if registration is required
- Determining if the runner should exit after registration
- [crates/runner/src/tangle/config.rs]
- [crates/runner/src/eigenlayer/config.rs]
- [crates/runner/src/eigenlayer/bls.rs]
- [crates/runner/src/eigenlayer/ecdsa.rs]
The job system includes comprehensive error handling for different components.
JobId (class): wraps a `[u64; 4]`, with constants `ZERO`, `MIN`, and `MAX`, plus `from(value) : JobId` and `into() : T` conversions.
Convertible from/to:
- Numeric types: `u8`, `u16`, `u32`, `u64`, `u128`, `i8`, `i16`, `i32`, `i64`, `i128`, `usize`, `isize`
- Byte arrays: `[u8; 32]`
- Strings: `&str`, `String`
- Byte slices: `&[u8]`, `Vec<u8>`
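Given the `[u64; 4]` representation, such conversions might look like the following sketch. The zero-extension and big-endian limb order used here are assumptions, not the SDK's documented rules:

```rust
// Sketch of a 256-bit JobId built from smaller primitives.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct JobId([u64; 4]);

impl From<u64> for JobId {
    // Zero-extend a small integer into the lowest limb.
    fn from(v: u64) -> Self {
        JobId([0, 0, 0, v])
    }
}

impl From<[u8; 32]> for JobId {
    // Interpret 32 bytes as four big-endian u64 limbs.
    fn from(bytes: [u8; 32]) -> Self {
        let mut limbs = [0u64; 4];
        for (i, chunk) in bytes.chunks_exact(8).enumerate() {
            limbs[i] = u64::from_be_bytes(chunk.try_into().unwrap());
        }
        JobId(limbs)
    }
}

fn main() {
    assert_eq!(JobId::from(7u64), JobId([0, 0, 0, 7]));
    let mut raw = [0u8; 32];
    raw[31] = 7; // lowest byte of the last big-endian limb
    assert_eq!(JobId::from(raw), JobId([0, 0, 0, 7]));
}
```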
JobCall (class): holds `Parts head` and `T body`, with `new(job_id, body)`, `from_parts(parts, body)`, `job_id()`, `metadata()`, `extensions()`, `into_parts()`, and `map(F) : JobCall<U>`.

Parts (class): holds `JobId job_id`, `MetadataMap metadata`, and `Extensions extensions`, with `new(job_id) : Parts`.
1. Producers generate `JobCall` events, which contain a `JobId` and data.
2. The Runner receives these calls and passes them to the Router.
3. The Router matches the `JobId` to the appropriate job handler.
4. The Job Handler processes the call and returns a `JobResult`.
5. The Runner collects the results and distributes them to all registered Consumers.
The BlueprintRunner orchestrates the job execution flow, connecting producers, the router, job handlers, and consumers.
Extractor traits (diagram): `FromJobCall` (`type Rejection`, `from_job_call(JobCall, Ctx) : Result<Self, Rejection>`) and `FromJobCallParts` (`type Rejection`, `from_job_call_parts(Parts, Ctx) : Result<Self, Rejection>`) define extraction; `Context` and `Metadata` implement `from_job_call_parts` infallibly (`Result<Self, Infallible>`).

Extractors allow job handlers to:
- Extract specific data from job calls.
- Access shared context needed for processing.
- Handle metadata attached to job calls.
The job system allows returning various types from jobs, which are converted to JobResult via the IntoJobResult trait.
Conversion Process (diagram): return types (`String`, `Bytes`/`BytesMut`, `Void`, `()`, `Result<T, E>`, custom types) pass through the `IntoJobResult` trait to become a `JobResult`, or `None` (no result) in the case of `Void`.
The BlueprintRunner uses a builder pattern to configure all necessary components for job execution:
- Blueprint Configuration - Protocol-specific registration and requirements.
- Producers - Generate job calls from events (e.g., blockchain events).
- Router - Directs job calls to appropriate handlers.
- Consumers - Receive and process job results.
- Background Services - Additional services that need to run alongside jobs.
+builder(config, env) : BlueprintRunnerBuilder
BlueprintRunnerBuilder
-config: DynBlueprintConfig
-env: BlueprintEnvironment
-producers: Vec<Producer>
-consumers: Vec<Consumer>
-router: Option<Router>
-background_services: Vec<DynBackgroundService>
-shutdown_handler: F
+router(Router) : Self
+producer(Stream) : Self
+consumer(Sink) : Self
+background_service(BackgroundService) : Self
+with_shutdown_handler(F) : Self
+run() : Result

FinalizedBlueprintRunner
-config: DynBlueprintConfig
-producers: Vec<Producer>
-consumers: Vec<Consumer>
-router: Router
-env: BlueprintEnvironment
-background_services: Vec<DynBackgroundService>
-shutdown_handler: F
-run() : Result

The Job trait is the core abstraction for implementing job handlers. It is automatically implemented for async functions that take appropriate parameters and return types convertible to JobResult.
- RunnerError
  - NoRouter
  - NoProducers
  - Keystore (KeystoreError)
  - Networking (NetworkingError)
  - Io (IoError)
  - Config (ConfigError)
  - BackgroundService (String)
  - JobCall (JobCallError)
  - Producer (ProducerError)
  - Consumer (BoxError)
  - Tangle (TangleError)
  - Eigenlayer (EigenlayerError)
  - Other (BoxError)
- JobCallError
  - JobFailed (BoxError)
  - JobDidntFinish (JoinError)
- ProducerError
  - StreamEnded
  - Failed (BoxError)

Sources: [crates/runner/src/error.rs]
When using the job system, follow these guidelines:
- JobId Selection:
  - Use simple numeric IDs for basic applications.
  - Use string-based IDs or domain-specific types for more complex applications.
  - Consider using a hash-based ID for protocol-specific identifiers.
- Return Types:
  - Return `Result<T, E>` to properly handle errors.
  - Use `Void` when a job should not produce a result.
  - Return structured data for complex responses.
- Context Usage:
  - Use `Context` to share state between jobs.
  - Organize context into smaller components using `FromRef`.
  - Consider thread-safety with shared mutable context.
- Error Handling:
  - Implement custom error types when needed.
  - Use the `?` operator with `Result` returns for clean error propagation.
  - Provide meaningful error messages for debugging.
The Router component is a central part of the Blueprint framework that handles dispatching jobs to the appropriate handlers based on job IDs. It serves as the routing mechanism between job calls and their implementations, allowing for flexible composition of jobs and services with middleware support.
- Job Dispatching: Routes jobs to the correct handlers.
- Middleware Support: Allows for the integration of middleware in job processing.
- Flexible Composition: Facilitates the combination of jobs and services.
For more detailed information about the job system, refer to the Job System.
The Router takes incoming JobCall objects, matches them against registered handlers based on their job ID, and returns the results. It supports three types of routing mechanisms:
- Specific routes: Matched to exact job IDs
- Always routes: Executed for every job call regardless of ID
- Fallback routes: Used when no specific route matches
The Router is built around an internal architecture that efficiently maps job IDs to their handlers.
Router System (diagram): a `JobCall` enters `Router<Ctx>`, which wraps `RouterInner<Ctx>` and its `JobIdRouter<Ctx>`. `route()` populates the specific routes map, `always()` the always routes, and `fallback()` the fallback route; matching on the job ID executes the chosen route(s), and the `JobResult`s are returned as `Option<Vec<JobResult>>`.
The core components are:
- `Router<Ctx>`: the main entry point that wraps `RouterInner` and provides the public API
- `RouterInner`: contains the `JobIdRouter` and manages routing state
- `JobIdRouter`: performs the actual routing based on job IDs
- `Route`: represents a job handler that can process job calls and return results
When a job call is received, the Router processes it through a specific sequence to determine which handlers should execute.
"JobIdRouter" "Router" "Client"
alt[Fallback route exists][No fallback route]
alt[Job ID matches a specific route][No specific route matched]
JobCall call_with_context
Execute specific route
Execute always routes
All job results
Check for fallback route
Execute fallback route
Fallback job result
None (no routes matched)
Option<Vec<JobResult>>
- Specific routes are matched exactly by job ID.
- Always routes are executed regardless of specific route matches.
- The fallback route only runs when no specific route matches.
- Results from all executed routes are collected and returned.
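A self-contained sketch of that dispatch order, with toy handler and router types (the real Router is generic over context, async, and tower-based; this only mirrors the specific/always/fallback logic described above):

```rust
use std::collections::HashMap;

type Handler = fn(u64) -> String;

// Toy router holding the three route kinds described above.
struct Router {
    specific: HashMap<u64, Handler>,
    always: Vec<Handler>,
    fallback: Option<Handler>,
}

impl Router {
    fn call(&self, job_id: u64) -> Option<Vec<String>> {
        match self.specific.get(&job_id) {
            Some(h) => {
                // Specific route plus every always route; results collected.
                let mut results = vec![h(job_id)];
                results.extend(self.always.iter().map(|a| a(job_id)));
                Some(results)
            }
            // No specific match: the fallback handles the call, if present;
            // otherwise no routes matched at all.
            None => self.fallback.map(|f| vec![f(job_id)]),
        }
    }
}

fn square_info(id: u64) -> String { format!("handled {id}") }
fn log_call(id: u64) -> String { format!("log {id}") }
fn unknown(id: u64) -> String { format!("unknown {id}") }

fn main() {
    let mut specific = HashMap::new();
    specific.insert(1u64, square_info as Handler);
    let router = Router {
        specific,
        always: vec![log_call as Handler],
        fallback: Some(unknown as Handler),
    };
    assert_eq!(router.call(1), Some(vec!["handled 1".to_string(), "log 1".to_string()]));
    assert_eq!(router.call(9), Some(vec!["unknown 9".to_string()]));
}
```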
The Router integrates with the tower middleware ecosystem by implementing the Service trait, allowing it to be composed with other services and middleware.
Service Integration (diagram): the Router implements the tower `Service` trait. `route()` and `route_service()` attach job handlers and other services, `layer()` wraps them in middleware, and `call()` handles a client's `JobCall`, returning an `Option<...>` of results.
The Router provides a builder-style API that allows for fluent configuration:
let router = Router::new();
// Add a route for a specific job ID: router.route(job_id, job)
// Add a handler that runs for every job call: router.always(job)
// Add a handler that runs when no specific route matches: router.fallback(job)

The Router can be enhanced with middleware layers that wrap around routes:

router = router.layer(middleware_layer);

Routes can access shared context data:

router = router.with_context(context);

The routing mechanism follows a specific pattern to determine which handlers to execute:
## Integrating Middleware with the Router
### Integration Benefits
- **Using the Router with existing tower middleware**
- **Applying common patterns:**
- Rate limiting
- Timeouts
- Retries
- **Composing the Router with other services**
### Sources
- [Routing Implementation](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/router/src/routing.rs#L285-L323)
- [Future Implementation](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/router/src/future.rs#L77-L94)
## Example Usage
Here's a conceptual example of how the Router might be used in a Blueprint application:
```rust
// Create a new router
let router = Router::new()
// Add a route for calculating squares
    .route(CALCULATE_SQUARE_JOB_ID, |call: JobCall| async move {
        // Parse the input number
        let number: u64 = /* extract from call */;
        // Return the square
        number * number
    })
    // Add a route for calculating cubes
    .route(CALCULATE_CUBE_JOB_ID, |call: JobCall| async move {
        // Parse the input number
        let number: u64 = /* extract from call */;
        // Return the cube
        number * number * number
    })
    // Add a logging handler that runs for every job
    .always(|call: JobCall| async move {
        println!("Processing job: {}", call.job_id());
        // Return nothing (None) as this is just for logging
    })
    // Add a fallback for unknown job IDs
    .fallback(|call: JobCall| async move {
        // Return an error message
        format!("Unknown job ID: {}", call.job_id())
    });
```
The Router is a versatile component of the Blueprint framework that enables flexible job routing and execution. It supports multiple routing strategies and middleware integration, providing a powerful foundation for building complex job processing systems while maintaining clean separation of concerns. The Router integrates closely with the Blueprint Runner to provide the execution environment for jobs, and with the wider Blueprint framework through the Job trait and Service implementations.
The Keystore component of the Blueprint framework provides a secure and flexible system for managing cryptographic keys and signing operations. It handles key generation, storage, retrieval, and signing capabilities across various key types with support for multiple storage backends.
The Blueprint Keystore provides a unified interface for cryptographic key management operations while supporting pluggable storage backends. It's designed to securely store private keys while making them available for signing operations when needed.
Keystore overview (diagram): storage backends (InMemoryStorage, FileStorage, SubstrateStorage) and key types (ECDSA (K256), Ed25519, SR25519, BLS (377/381), BN254), with operations `generate()`, `sign_with_local()`, `list_local()`, `insert()`, and `remove()`.
The Keystore supports various cryptographic algorithms through a unified KeyType interface.
| Key Type | Algorithm | Feature Flag | Common Use |
|---|---|---|---|
| SR25519 | Schnorrkel | `sr25519-schnorrkel` or `tangle` | Substrate account keys |
| ED25519 | Ed25519 (Zebra) | `zebra` or `tangle` | General purpose signatures |
| ECDSA | secp256k1 | `ecdsa` or `tangle` | EVM compatibility |
| BLS381 | BLS on BLS12-381 | `bls` or `tangle` | Aggregate signatures |
| BLS377 | BLS on BLS12-377 | `bls` or `tangle` | Aggregate signatures |
| BN254 | BN254 curve | `bn254` | Zero-knowledge proofs |
The KeyType trait abstracts over these different cryptographic implementations, providing common operations like signing, verification, and key generation.
When multiple storage backends are configured, the Keystore tries them in priority order until it finds the key.
// Store keys with higher priority (255) for more important backends
let entry = LocalStorageEntry { storage, priority: 255 };
backends.push(entry);
backends.sort_by_key(|e| cmp::Reverse(e.priority));

The Keystore uses Rust's feature flags to enable only what's needed:

- `std`: enables standard library features like file system storage
- `substrate-keystore`: enables Substrate keystore integration
- Key type features: `sr25519-schnorrkel`, `ecdsa`, `zebra`, `bls`, `bn254`, `sp-core`
- Remote signer features: `aws-signer`, `gcp-signer`, `ledger-browser`, `ledger-node`
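The priority ordering shown in the snippet above can be exercised with a self-contained sketch. The `Backend` type and `find_key` helper here are toy stand-ins; the real backends implement the `RawStorage` trait:

```rust
use std::collections::HashMap;

// Toy backend: a priority plus an in-memory map from public key to secret bytes.
struct Backend {
    priority: u8,
    keys: HashMap<String, Vec<u8>>,
}

// Try backends highest-priority first and return the first hit.
fn find_key<'a>(backends: &'a mut [Backend], public: &str) -> Option<&'a Vec<u8>> {
    backends.sort_by_key(|b| std::cmp::Reverse(b.priority));
    backends.iter().find_map(|b| b.keys.get(public))
}

fn main() {
    let mut backends = vec![
        Backend { priority: 1, keys: HashMap::from([("alice".to_string(), vec![1u8])]) },
        Backend { priority: 255, keys: HashMap::from([("alice".to_string(), vec![2u8])]) },
    ];
    // The higher-priority backend wins when both hold the same public key.
    assert_eq!(find_key(&mut backends, "alice"), Some(&vec![2u8]));
    assert_eq!(find_key(&mut backends, "bob"), None);
}
```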
The Keystore is designed with security in mind, including:
- Key Isolation: Private keys never leave the storage backend unless explicitly exported.
- Feature-Based Security: Remote signing options can be used to keep keys in hardware devices or cloud HSMs.
- Multiple Backend Support: Critical keys can be stored in more secure backends while less sensitive keys use more convenient storage.
The Keystore provides a consistent interface for key storage mechanisms.
The Keystore supports multiple storage backends through a unified interface. Each backend implements the RawStorage trait.
Storage architecture (diagram): a TypedStorage wrapper sits over the storage implementations (`InMemoryStorage`, `FileStorage`, `SubstrateStorage`), each exposing `store()`, `load()`, `remove()`, `contains()`, and `list()`.
| Backend | Purpose | Feature Flag | Persistence |
|---|---|---|---|
| InMemoryStorage | Fast in-memory storage | Always available | None (volatile) |
| FileStorage | File-based persistent storage | `std` | File system |
| SubstrateStorage | Integration with Substrate keystore | `substrate-keystore` | Depends on Substrate config |
The Keystore is configured through the KeystoreConfig struct:
```rust
KeystoreConfig {
    in_memory(bool),
    fs_root(path),
    substrate(keystore),
    remote(config),
}
```

- `in_memory(true)`: Enable in-memory storage (the default if no other storage is specified)
- `fs_root(path)`: Enable file-based storage with the specified root directory
- `substrate(keystore)`: Enable Substrate keystore integration
- `remote(config)`: Enable remote keystores such as AWS KMS, GCP KMS, or Ledger hardware wallets

The configured struct is then passed to `Keystore::new(config)`.

Common key management tasks from the CLI:

- Generate a new SR25519 key:
  `cargo tangle key generate --key-type sr25519 --output ./my-key.json`
- Import an existing ECDSA key:
  `cargo tangle key import --key-type ecdsa --secret "0x1234..." --keystore-path ./keystore`
- List all keys in the keystore:
  `cargo tangle key list --keystore-path ./keystore`
- Export a private key:
  `cargo tangle key export --key-type sr25519 --public "0xabcd..." --keystore-path ./keystore`
- Generate a mnemonic phrase:
  `cargo tangle key generate-mnemonic --word-count 24`
```rust
// In-memory keystore (default)
let keystore = Keystore::new(KeystoreConfig::new())?;

// File-based keystore
let keystore = Keystore::new(KeystoreConfig::new().fs_root("./keystore"))?;

// Substrate keystore
let substrate_keystore = sc_keystore::LocalKeystore::open(path, None)?;
let keystore = Keystore::new(KeystoreConfig::new().substrate(Arc::new(substrate_keystore)))?;
```

```rust
// Generate a new key
let public_key = keystore.generate::<SpSr25519>(None)?;

// Generate with a deterministic seed
let seed = b"my deterministic seed";
let public_key = keystore.generate::<K256Ecdsa>(Some(seed))?;

// Sign a message
let msg = b"message to sign";
let signature = keystore.sign_with_local::<SpSr25519>(&public_key, msg)?;

// List all keys of a specific type
let keys = keystore.list_local::<SpSr25519>()?;

// Check if a key exists
let exists = keystore.contains_local::<SpSr25519>(&public_key)?;

// Remove a key
```

The Blueprint framework provides CLI commands for key management through the cargo-tangle command. These commands allow users to generate, import, export, and list keys.
```shell
cargo tangle key generate --key-type sr25519
cargo tangle key import --key-type ecdsa --keystore-path ./keystore
cargo tangle key list --keystore-path ./keystore
cargo tangle key export --key-type sr25519 --public '0x...'
```
- generate
- import
- export
- list
- generate-mnemonic
The Keystore provides a comprehensive error system through the Error enum, which covers:
- Storage-related errors: `StorageNotSupported`, `Io`
- Key operations: `KeyNotFound`, `KeyTypeNotSupported`
- Cryptographic failures: `SignatureFailed`, `Crypto`
- Backend-specific errors: `SpKeystoreError`, `AwsSigner`, etc.
All Keystore operations return a Result<T, Error> allowing for proper error handling and propagation.
```rust
match keystore.sign_with_local::<SpSr25519>(&public_key, msg) {
    Ok(signature) => {
        // Use the signature
    }
    Err(Error::KeyNotFound) => {
        // Handle missing key
    }
    Err(Error::KeyTypeNotSupported) => {
        // Handle unsupported key type
    }
    Err(e) => {
        // Handle other errors
        println!("Error signing message: {}", e);
    }
}
```

When using the Keystore in production, consider:
- Using hardware-backed keystores where possible
- Setting appropriate filesystem permissions for file-based keystores
- Using secure memory handling techniques when working with private keys in memory
- Overview
- Architecture Overview
- Key Concepts
- Protocol Support
- Getting Started
- Installation
- Creating Your First Blueprint
- Example Blueprints
- Blueprint SDK
- Core Components
- Job System
- Router
- Networking
- Keystore
- Blueprint Runner
- Runner Configuration
- Job Execution Flow
- Blueprint Manager
- Event Handling
- Blueprint Sources
- CLI Reference
- Blueprint Commands
- Key Management Commands
- Deployment Options
- Development
- Build Environment
- Testing Framework
- CI/CD
- Advanced Topics
- Networking Extensions
- Macro System
- Custom Protocol Integration
The Blueprint Manager is a critical component of the Tangle Blueprint framework that handles the lifecycle of decentralized applications (dApps) running across blockchain networks. It monitors blockchain events, dynamically fetches application binaries from various sources, and manages their execution based on on-chain registrations and instructions.
The Blueprint Manager acts as an orchestration layer, connecting on-chain events to off-chain service execution. It listens for events from the Tangle Network, fetches the appropriate blueprint binaries based on on-chain specifications, and manages their execution lifecycle.
Architecture diagram: within the Blueprint Manager, an event handler (driven by the manager config) maintains the set of active gadgets; a source handler selects among GitHub, container, and test sources; a Tangle client supplies verified blueprints whose process handles are tracked. Externally, the Tangle Network's blueprint events trigger the manager, which fetches blueprint binaries and executes them via the Blueprint Runner.
- tangle.rs
- Cargo.toml (Eigenlayer)
- Cargo.toml (EVM)
- Cargo.toml (Tangle)
- Cargo.toml (Contexts)
- Cargo.toml (Manager)
- config.rs
- event_handler.rs
- mod.rs
- main.rs
- utils.rs
- binary.rs
- container.rs
- github.rs
- mod.rs (sources)
- testing.rs
- Cargo.toml (SDK)
- Cargo.toml (Anvil)
- Cargo.toml (Core)
- Cargo.toml (Eigenlayer)
- Cargo.toml (Tangle)
The VerifiedBlueprint struct encapsulates a blueprint that has been validated against on-chain data and is ready for execution. It contains:
- A list of potential source fetchers for the blueprint
- The blueprint's metadata and configuration
- Methods to start services for the blueprint
The Blueprint Manager operates on an event-driven model, responding to events from the Tangle blockchain to manage blueprint services.
Event handling sequence (Tangle Network → Blueprint Manager → Source Handler → Blueprint Runner): on each finality notification, the manager polls for blueprint events. Depending on whether it sees a new blueprint registration, an unregistered service, or a service status change, it queries blueprint details from the chain, fetches the blueprint binary through the source handler, then spawns, stops, or restarts/updates the service and updates its set of active gadgets.
- PreRegistration: New blueprint registration that needs initialization.
- Registered: Confirms a blueprint has been registered.
- Unregistered: Indicates a blueprint has been unregistered and should be stopped.
- ServiceInitiated: Indicates a service has been started on-chain.
- JobCalled: Indicates a job has been called on a service.
- JobResultSubmitted: Indicates a job result has been submitted.
The Blueprint Manager supports fetching blueprints from multiple sources, enabling flexibility in deployment and development workflows.
- `BlueprintSourceHandler` interface, exposing `fetch()` and `spawn()`
- Fetchers:
  - GitHub source fetcher: downloads a GitHub release binary and runs it as a native process
  - Container source fetcher: pulls a container image and runs it as a Docker container
  - Test source fetcher: performs a local cargo build and runs the resulting test binary
- Each spawned blueprint is tracked through a `ProcessHandle`
1. GitHub Source:
   - Downloads binary releases from GitHub repositories
   - Verifies binary checksums for security
   - Manages binary execution as native processes
2. Container Source:
   - Pulls container images from Docker/Podman registries
   - Manages container lifecycle via the Docker API
   - Handles networking adaptations for containerized blueprints
3. Test Source:
   - Builds binaries from local source code
   - Used primarily for testing and development
   - Compiles using cargo within the repository
Each source handler implements the BlueprintSourceHandler trait, which provides a common interface for fetching and spawning blueprints.
When a blueprint event is detected, the Blueprint Manager verifies the blueprint and prepares it for execution:
Blueprint verification and execution flow: event detection → verify blueprint → select source fetcher → fetch binary/container → prepare arguments and environment → spawn process/container → track process status. A running service is monitored continuously; an error leads to cleanup of resources, an on-chain unregistration stops the service, and process death triggers a restart.
The Blueprint Manager is configured via the BlueprintManagerConfig struct, which provides options for:
| Configuration | Description | Default |
|---|---|---|
| `keystore_uri` | Path to the keystore for blockchain identity | Required |
| `data_dir` | Directory for blueprint data storage | `./data` |
| `verbose` | Verbosity level for logging | 0 |
| `pretty` | Enable pretty logging format | false |
| `instance_id` | Unique identifier for the manager instance | None |
| `test_mode` | Enable test mode for development | false |
| `preferred_source` | Preferred source type (Container, Native, Wasm) | Native |
| `podman_host` | URL for the Podman/Docker socket | `unix:///var/run/docker.sock` |
The manager also detects available source types on the system through the SourceCandidates struct, which checks for:
- Container runtime availability (Docker/Podman)
- Wasm runtime availability
- System capabilities for native binary execution
The Blueprint Manager can be integrated into applications using its SDK interface:
Integration sequence (Application → Blueprint SDK → Blueprint Manager → Tangle Runtime): the application configures a `BlueprintManagerConfig` and a `BlueprintEnvironment`, calls `run_blueprint_manager()` to obtain a `BlueprintManagerHandle`, and invokes `start()`. The manager then connects to the Tangle runtime, subscribes to events, and processes blockchain events to manage blueprints while the application awaits the handle. On `shutdown()`, all running blueprints are terminated.
The BlueprintManagerHandle provides methods to:
- Start the manager with `start()`
- Access keypair information with `sr25519_id()` and `ecdsa_id()`
- Shut down the manager with `shutdown()`
- Wait for completion by awaiting the handle
The Blueprint Manager is available as a CLI tool, used by the cargo-tangle command:
```shell
cargo-tangle run blueprint
```

The CLI supports:

- Specifying RPC endpoints
- Configuring keystore paths
- Setting data directories
- Selecting blueprint IDs
The BlueprintManagerHandle is a critical abstraction that provides control over the running Blueprint Manager:
```rust
// Sketch of BlueprintManagerHandle's fields and methods (from the class diagram)
struct BlueprintManagerHandle {
    span: tracing::Span,
    sr25519_id: TanglePairSigner<sr25519::Pair>,
    ecdsa_id: TanglePairSigner<ecdsa::Pair>,
    keystore_uri: String,
    shutdown_call: Option<oneshot::Sender<()>>,
    start_tx: Option<oneshot::Sender<()>>,
    running_task: JoinHandle<Result<(), Error>>,
}

impl BlueprintManagerHandle {
    pub fn start(&mut self) -> Result<(), Error>;
    pub fn sr25519_id(&self) -> &TanglePairSigner<sr25519::Pair>;
    pub fn ecdsa_id(&self) -> &TanglePairSigner<ecdsa::Pair>;
    pub fn shutdown(&mut self) -> Result<(), Error>;
    pub fn keystore_uri(&self) -> &str;
    pub fn span(&self) -> &tracing::Span;
}

// Implements Future, with poll() returning Poll<Result<(), Error>>
```

The handle implements the Future trait, allowing it to be awaited in async contexts. When dropped, it automatically starts the Blueprint Manager if it hasn't been started yet, ensuring that resources are properly initialized.
The Blueprint Manager is a central component in the Tangle Blueprint framework that bridges on-chain events to off-chain execution. It provides a flexible system for fetching, verifying, and executing blueprints from various sources, enabling dynamic deployment and management of decentralized applications. Its event-driven architecture allows it to respond to blockchain events in real-time, ensuring that blueprint services are properly synchronized with their on-chain representations. Through its configuration options and source handler system, it supports diverse deployment scenarios, from development and testing to production environments.
The Blueprint networking system consists of several interconnected components that work together to provide a secure and flexible peer-to-peer communication layer.
- Peer-to-Peer Communication: Built on libp2p, enabling secure connections and peer verification.
- Message Exchange: Supports both direct communication and gossip protocols.
For more details on networking extensions, refer to Advanced Topics - Networking Extensions.
```javascript
// Pseudocode (js-libp2p style): establishing a connection to a discovered peer
const peer = await libp2p.peerDiscovery.findPeer(peerId);
await libp2p.dial(peer);
```
Applications interact with the network through a NetworkServiceHandle, which provides a simplified interface to the underlying network service:
```rust
NetworkServiceHandle<K: KeyType> {
    local_peer_id: PeerId,
    blueprint_protocol_name: Arc<String>,
    local_signing_key: K::Secret,
    sender: NetworkSender<K>,
    receiver: NetworkReceiver,
    peer_manager: Arc<PeerManager<K>>,
    local_verification_key: Option<VerificationIdentifierKey<K>>,
}
// Methods: send(routing: MessageRouting, message: Vec<u8>),
//          peers() -> Vec<PeerId>,
//          next_protocol_message() -> Option<ProtocolMessage>,
//          get_participant_id() -> Option<usize>,
//          split() -> (NetworkSender<K>, NetworkReceiver)
```
The handle provides methods for:
- Sending messages (point-to-point or gossip)
- Querying connected peers
- Receiving incoming messages
- Retrieving participant IDs for consensus protocols
| Parameter | Description |
|---|---|
| `network_name` | Name/namespace for the network |
| `instance_id` | Unique identifier for this blueprint instance |
| `instance_key_pair` | Secret key for signing protocol messages |
| `local_key` | libp2p keypair for peer identification |
| `listen_addr` | Network address to listen on |
| `target_peer_count` | Target number of peers to maintain |
| `bootstrap_peers` | Initial peers to connect to |
| `enable_mdns` | Enable multicast DNS discovery |
| `enable_kademlia` | Enable Kademlia DHT for peer discovery |
| `using_evm_address_for_handshake_verification` | Whether to use EVM addresses for peer verification |
The NetworkService is the core component of the networking system. It initializes and manages the libp2p swarm and coordinates all network activities.
The NetworkService is configured through the NetworkConfig struct:
```rust
NetworkConfig<K: KeyType> {
    network_name: String,
    instance_id: String,
    instance_key_pair: K::Secret,
    local_key: Keypair,
    listen_addr: Multiaddr,
    target_peer_count: u32,
    bootstrap_peers: Vec<Multiaddr>,
    enable_mdns: bool,
    enable_kademlia: bool,
    using_evm_address_for_handshake_verification: bool,
}

NetworkService<K: KeyType> {
    swarm: Swarm<GadgetBehaviour<K>>,
    local_signing_key: K::Secret,
    peer_manager: Arc<PeerManager<K>>,
}
// Methods: new(config: NetworkConfig<K>, allowed_keys: AllowedKeys<K>,
//              allowed_keys_rx: Receiver<AllowedKeys<K>>) -> Result<Self, Error>,
//          start() -> NetworkServiceHandle<K>,
//          run()
```

The PeerManager is responsible for tracking peer states, managing verification, and controlling which peers are allowed to connect.
Handshake sequence (Node A ↔ Node B): after connection establishment, Node A sends a handshake request containing a signed challenge; Node B verifies the signature against its whitelist and replies with a handshake response. Once both peers are verified (`PeerManager.is_peer_verified()` returns true), protocol messages are allowed.
The PeerManager maintains a whitelist of allowed keys, which can be either:
- EVM Addresses: For Ethereum-compatible verification
- Instance Public Keys: For blockchain-specific key types
```rust
enum AllowedKeys<K: KeyType> {
    EvmAddresses(HashSet<Address>),
    InstancePublicKeys(HashSet<K::Public>),
}

enum VerificationIdentifierKey<K: KeyType> {
    EvmAddress(Address),
    InstancePublicKey(K::Public),
}
// Methods: verify(msg: &[u8], signature: &[u8]) -> Result<bool, Error>,
//          to_bytes() -> Vec<u8>

PeerManager<K: KeyType> {
    peers: DashMap<PeerId, PeerInfo>,
    verified_peers: DashSet<PeerId>,
    verification_id_keys_to_peer_ids: DashMap<VerificationIdentifierKey>,
    banned_peers: DashMap<PeerId>,
    whitelisted_keys: Arc<RwLock<Vec<VerificationIdentifierKey<K>>>>,
}
// Methods: clear_whitelisted_keys(),
//          insert_whitelisted_keys(keys: AllowedKeys<K>),
//          is_key_whitelisted(key: &VerificationIdentifierKey<K>) -> bool,
//          verify_peer(peer_id: &PeerId),
//          is_peer_verified(peer_id: &PeerId) -> bool,
//          ban_peer(peer_id: PeerId, reason: String, duration: Option<Duration>)
```

The networking layer supports two primary communication methods:
- Direct Request/Response: For targeted messages to specific peers.
- Gossip: For broadcast messages to all peers subscribed to a topic.
Gossip communication: a publisher node publishes a message to the blueprint topic, and the message is delivered to every subscriber (Subscriber 1 through Subscriber N). Direct communication: Node A sends a request to Node B and receives a response.
- Cargo.toml (core)
- Cargo.toml (crypto)
- Cargo.toml (evm-extra)
- lib.rs (macros)
- Cargo.toml (networking)
- Cargo.toml (agg-sig-gossip)
- Cargo.toml (round-based)
- rand_protocol.rs (tests)
- behaviour.rs
- peers.rs (discovery)
- service.rs
- service_handle.rs
- mod.rs (test_utils)
- mod.rs (tests)
- Cargo.toml (producers-extra)
- Cargo.toml (router)
- Cargo.toml (local-database)
- Cargo.toml (testing-utils)
The Blueprint protocol defines several message types:
| Message Type | Purpose |
|---|---|
| `InstanceMessageRequest` | Direct request to a peer |
| `InstanceMessageResponse` | Response to a direct request |
| `HandshakeMessage` | Used for peer verification |
| `ProtocolMessage` | Generic protocol message with routing info |
The aggregated signature gossip extension (blueprint-networking-agg-sig-gossip-extension) optimizes consensus by aggregating signatures to reduce network traffic:
Built on blueprint-networking, blueprint-core, and blueprint-crypto, the extension provides a signature aggregator (signature verification and signature combining) and an aggregated gossip handler (topic management and message propagation).
The round-based extension (blueprint-networking-round-based-extension) provides support for round-based protocols, such as distributed key generation or Byzantine agreement:
Built on blueprint-networking, blueprint-core, blueprint-crypto, and the round-based crate, the extension provides a round-based network adapter (round message routing and message delivery guarantees) and MPC protocol integration (MPC party handling and protocol state management).
The Blueprint SDK includes comprehensive testing utilities for the networking components, allowing developers to simulate network conditions and test protocols in isolation:
```rust
TestNode<K: KeyType> {
    service: Option<NetworkService<K>>,
    peer_id: PeerId,
    listen_addr: Option<Multiaddr>,
    instance_key_pair: K::Secret,
    local_key: Keypair,
    using_evm_address_for_handshake_verification: bool,
}
// Methods:
//   new(network_name: &str, instance_id: &str, allowed_keys: AllowedKeys<K>,
//       bootstrap_peers: Vec<Multiaddr>,
//       using_evm_address_for_handshake_verification: bool) -> Self
//   start() -> Result<NetworkServiceHandle>

// Free helper functions:
//   create_whitelisted_nodes<K: KeyType>(count: usize, network_name: &str,
//       instance_name: &str,
//       using_evm_address_for_handshake_verification: bool) -> Vec<TestNode<K>>
//   wait_for_peer_discovery<K: KeyType>(handles: &[NetworkServiceHandle<K>],
//       timeout: Duration) -> Result
//   wait_for_all_handshakes<K: KeyType>(handles: &[&mut NetworkServiceHandle<K>],
//       timeout_length: Duration)
```
These utilities make it easy to:
- Create test networks with multiple nodes
- Simulate peer discovery and handshakes
- Test protocol behavior under various conditions
- Verify consensus properties
Here's a basic example of how to set up the networking layer in a Blueprint application:
- Create a network configuration
- Initialize the NetworkService with appropriate key types
- Start the service to get a handle
- Use the handle to send and receive messages
Applications interact with the networking layer through the Job system and Router components of the Blueprint SDK.
The networking layer is designed to be extensible, allowing for:
- Additional protocol integrations
- Performance optimizations for large-scale networks
- Enhanced security features
- More sophisticated peer discovery mechanisms
The Blueprint Runner orchestrates the job execution flow, connecting producers, the router, and consumers.
The execution of a job involves several steps:
- Job Call Creation: Producers create `JobCall` instances with a specific `JobId` and a body containing the necessary data for job execution.
Job execution sequence (Producer → BlueprintRunner → Router → JobHandler → Consumer): the producer creates a JobCall; the runner routes it through the Router to the matching job handler; the handler executes the job and returns a JobResult; the runner processes the result and broadcasts it to consumers.
- JobId: A unique identifier used by the Router to determine the appropriate handler.
- Metadata: Additional information for handlers or middleware.
- Extensions: Arbitrary data attached to a job call.
For further details, refer to:
Relevant source files:
- context.rs
- metadata.rs
- call.rs
- id.rs
- mod.rs
- into_job_result.rs
- result/mod.rs
- lib.rs
- util.rs
- Cargo.toml (runner)
- config.rs (runner)
- bls.rs (eigenlayer)
- config.rs (eigenlayer)
- ecdsa.rs (eigenlayer)
- error.rs (eigenlayer)
- error.rs (runner)
- lib.rs (runner)
- config.rs (tangle)
- error.rs (tangle)
- Cargo.toml (tangle-extra)
- Cargo.lock (incredible-squaring)
- Cargo.toml (incredible-squaring)
- Integration with Blockchain Protocols: Blueprint allows operators to register before participating in job execution, facilitating compatibility with various blockchain protocols.
- Asynchronous Task Processing: The job execution flow is designed for processing tasks across different blockchain environments, enabling effective job design, implementation, and debugging.
After a job is executed, its return value is converted to a JobResult using the IntoJobResult trait. This standardizes how job results are represented, regardless of the job's return type.
```rust
JobResult {
    head: Parts,
    body: T,
}
// Methods: metadata() -> MetadataMap, body() -> Result<T, E>,
//          into_parts() -> Result
```

The BlueprintRunner broadcasts the job results to all registered consumers, completing the job execution cycle.
The builder pattern allows for flexible configuration of the runner with producers, consumers, background services, and a router.
A producer is a stream that generates job calls. It can derive job calls from blockchain events, timers, or any other source.
```rust
type Producer = Arc<Mutex<Box<dyn Stream<Item = Result<JobCall, BoxError>> + Send + Unpin + 'static>>>;
```

A consumer is a sink that receives job results. It can store results, relay them to other systems, or perform actions based on them.
```rust
type Consumer = Arc<Mutex<Box<dyn Sink<JobResult, Error = BoxError> + Send + Unpin + 'static>>>;
```

The router matches job calls to the appropriate handlers based on the job ID and can apply middleware to all jobs or specific routes.
A job handler is an async function that processes a job call. It can extract arguments from the call, perform business logic, and return a result.
Job trait pipeline: a `JobCall` enters `Job::call()`, which extracts arguments, executes the job function, and converts the return value to a `JobResult`.
A JobCall consists of two main parts: the header (Parts) and the body.
```rust
JobCall {
    head: Parts,
    body: T,
}
// Methods: job_id() -> JobId, metadata() -> MetadataMap,
//          extensions() -> Extensions, into_parts() -> (Parts, T)

Parts {
    job_id: JobId,
    metadata: MetadataMap,
    extensions: Extensions,
}
```
Flow roles: the Producer creates the JobCall; the Blueprint Runner routes it; the Router dispatches it to the Job Handler, which executes the job and returns a JobResult; the runner processes the result and broadcasts it to Consumers.
The BlueprintRunner receives job calls from producers and passes them to the Router, which determines the appropriate handler based on the JobId.
Router mechanism: a JobCall (with its JobId) enters the Router; on a route match it goes to the registered job handler, otherwise to the fallback handler, while "always" handlers receive every call regardless of matching.
When a job is executed, the framework first extracts arguments from the job call using the FromJobCall and FromJobCallParts traits. These extractors can access parts of the job call or the entire call.
JobCall → split into Parts + body → extract arguments via FromJobCall/FromJobCallParts → execute job function → convert return value via IntoJobResult → JobResult.
Error handling is an integral part of the job execution flow. If a job returns an error, it is wrapped in a JobResult::Err and propagated to consumers.
Job return value → `JobResult::Ok` on success, `JobResult::Err` on failure.
In addition to job execution, the BlueprintRunner can also manage background services that run alongside job processing.
Background services management: each service's `start()` yields a `oneshot::Receiver`; while the services keep running, the runner continues, and a service failure triggers shutdown.
The FinalizedBlueprintRunner is the core component that manages the job execution flow. It is created and configured through the BlueprintRunnerBuilder.
```rust
BlueprintRunnerBuilder {
    config: Box<DynBlueprintConfig>,
    env: BlueprintEnvironment,
    producers: Vec<Producer>,
    consumers: Vec<Consumer>,
    router: Option<Router>,
    background_services: Vec<Box<DynBackgroundService>>,
    shutdown_handler: Future,
}
// Builder methods: router(Router) -> Self, producer(Stream) -> Self,
//                  consumer(Sink) -> Self,
//                  background_service(BackgroundService) -> Self,
//                  run() -> Result

FinalizedBlueprintRunner {
    config: Box<DynBlueprintConfig>,
    producers: Vec<Producer>,
    consumers: Vec<Consumer>,
    router: Router,
    env: BlueprintEnvironment,
    background_services: Vec<Box<DynBackgroundService>>,
    shutdown_handler: Future,
}
// Method: run() -> Result
```

Background services are started when the runner is launched and are monitored throughout the runner's lifecycle. If a background service fails, the runner will trigger a shutdown.
Before job execution begins, the BlueprintRunner checks if protocol-specific registration is required and performs it if necessary.
Registration flow: on startup the runner checks `requires_registration()`; if true, it registers with the protocol, then either exits (when `should_exit_after_registration()` is true) or continues to job execution; if false, it skips registration and proceeds directly to job execution.
The Blueprint Runner orchestrates job execution, lifecycle management, and protocol-specific operations.
- Router: Maps job IDs to handler functions. Required for operation.
- Producers: At least one producer is needed to supply job calls.
- Consumers: Handle outputs from processed jobs.
- Background Services: Perform auxiliary tasks.
- Shutdown Handler: Custom logic for shutdown.
```rust
let result = BlueprintRunner::builder(config, blueprint_env)
    // Required: Add a Router mapping job IDs to handlers
    .route(0, async || "Hello, world!")
    .route(1, handle_complex_job)
    // Required: Add at least one producer
    // Optional: Add result consumers
    // Optional: Add background services
    // Optional: Specify shutdown logic
    .with_shutdown_handler(async { println!("Shutting down!") })
    .run()
    .await;
```

The BlueprintEnvironment struct contains fundamental configuration settings needed by all blueprints, including protocol-specific information, network endpoints, and key management.
```rust
struct BlueprintEnvironment {
    http_rpc_endpoint: String,
    ws_rpc_endpoint: String,
    keystore_uri: String,
    data_dir: Option<PathBuf>,
    protocol_settings: ProtocolSettings,
    test_mode: bool,
}

enum ProtocolSettings {
    None,
    Tangle(TangleProtocolSettings),
    Eigenlayer(EigenlayerProtocolSettings),
}
```

| Setting | Description | Default |
|---|---|---|
| `http_rpc_endpoint` | HTTP RPC endpoint for the blockchain | Required |
| `ws_rpc_endpoint` | WebSocket RPC endpoint for the blockchain | Required |
| `keystore_uri` | Path to the keystore directory | Required |
| `data_dir` | Directory for blueprint data | `./data` (optional) |
| `test_mode` | Whether the blueprint is running in test mode | false |
| `protocol_settings` | Protocol-specific configuration | Depends on protocol |
```rust
// Loading from environment variables
let env = BlueprintEnvironment::load()?;

// Constructed manually
let env = BlueprintEnvironment {
    http_rpc_endpoint: "https://rpc.tangle.tools".to_string(),
    ws_rpc_endpoint: "wss://rpc.tangle.tools".to_string(),
    keystore_uri: "./keystore".to_string(),
    data_dir: Some(PathBuf::from("./data")),
    protocol_settings: ProtocolSettings::Tangle(TangleProtocolSettings {
        blueprint_id: 1,
        service_id: Some(2),
    }),
    test_mode: false,
};
```
Tangle protocol settings:

```rust
TangleProtocolSettings {
    blueprint_id: u64,
    service_id: Option<u64>,
}

TangleConfig {
    price_targets: PriceTargets,
    exit_after_register: bool,
}
// Constructors: new(price_targets), with_exit_after_register(bool)
```

| Setting | Description | Required |
|---|---|---|
| `blueprint_id` | The ID of the blueprint on Tangle Network | Yes |
| `service_id` | The service ID for this blueprint instance | No (for registration) |

Eigenlayer supports two configurations:

- ECDSA-based configuration, using `EigenlayerECDSAConfig`:
  - Requires `earnings_receiver_address` and `delegation_approver_address`
  - Constructor: `new(earnings_receiver, delegation_approver)`
- BLS-based configuration, using `EigenlayerBLSConfig`:
  - Requires `earnings_receiver_address`, `delegation_approver_address`, and `exit_after_register`
  - Constructors: `new(earnings_receiver, delegation_approver)` and `with_exit_after_register(bool)`
| Setting | Description | Default |
|---|---|---|
| `price_targets` | Resource pricing information | All zeros |
| `exit_after_register` | Whether to exit after registration | true |
- ECDSA Registration
- Check if the operator is registered with the stake registry.
- Register the operator with earnings receiver and delegation approver.
- Exit based on configuration.
Example environment settings for the Eigenlayer protocol:

```shell
ALLOCATION_MANAGER_ADDRESS=0x8a791620dd6260079bf849dc5567adc3f2fdc318
REGISTRY_COORDINATOR_ADDRESS=0xcd8a1c3ba11cf5ecfa6267617243239504a98d90
OPERATOR_STATE_RETRIEVER_ADDRESS=0xb0d4afd8879ed9f52b28595d31b441d079b2ca07
DELEGATION_MANAGER_ADDRESS=0xcf7ed3acca5a467e9e704c703e8d87f634fb0fc9
SERVICE_MANAGER_ADDRESS=0x36c02da8a0983159322a80ffe9f24b1acff8b570
STAKE_REGISTRY_ADDRESS=0x4c5859f0f772848b2d91f1d83e2fe57935348029
STRATEGY_MANAGER_ADDRESS=0xa513e6e4b8f2a923d98304ec87f64353c4d5c853
AVS_DIRECTORY_ADDRESS=0x5fc8d32690cc91d4c39d9d3abcbd16989f875707
REWARDS_COORDINATOR_ADDRESS=0xb7f8bc63bbcad18155201308c8f3540b07f84f5e
PERMISSION_CONTROLLER_ADDRESS=0x3aa5ebb10dc797cac828524e59a333d0a371443c
STRATEGY_ADDRESS=0x524f04724632eed237cba3c37272e018b3a7967e
```

Blueprints often need to register with their respective protocol networks before they can operate. The registration process is controlled by the BlueprintConfig trait implementation:
Registration flow: when the runner starts, it calls `requires_registration()`; if registration is required, `register()` runs, and `should_exit_after_registration()` then decides whether the runner exits or continues execution; otherwise registration is skipped and execution continues.
- Checks if the operator is already registered for the specified blueprint ID
- Registers the operator with the Tangle Network if needed
- Can optionally exit after registration
- Checks if the operator is registered
- Registers the operator
- Deposits into the strategy
- Sets the allocation delay
- Stakes tokens to quorums
- Registers to operator sets
- Exits based on the `exit_after_register` setting (defaults to `true`)
The Blueprint CLI provides commands to automate the runner configuration process, particularly the run command:
cargo tangle blueprint run --protocol <protocol> --rpc-url <url> [OPTIONS]
| Option | Description | Default |
|---|---|---|
| `--protocol, -p` | Protocol to use (tangle or eigenlayer) | Required |
| `--rpc-url, -u` | HTTP RPC endpoint URL | `http://127.0.0.1:9944` |
| `--keystore-path, -k` | Path to the keystore | `./keystore` |
| `--data-dir, -d` | Data directory path | `./data` |
| `--settings-file, -f` | Path to protocol settings file | `./settings.env` |
| `--network, -w` | Network to connect to (local, testnet, mainnet) | `local` |
| `--bootnodes, -n` | Optional bootnodes to connect to | None |
The CLI will prompt for required information if not provided and load protocol-specific settings from environment variables or a .env file.
Sources: [737-821](https://github.com/tangle-network/blueprint/blob/af8278cb/cli/src/main.rs#L737-L821), [80-96](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/runner/src/eigenlayer/config.rs#L80-L96)
Runner configuration errors are categorized into several types:
| Error Type | Description |
|---|---|
| `ConfigError` | Issues with configuration values |
| `KeystoreError` | Problems with the keystore |
| `NetworkingError` | Network connection issues |
| `NoRouterError` | Missing router configuration |
| `NoProducersError` | No job producers configured |
| Protocol-specific errors | Errors specific to Tangle or Eigenlayer |
Proper error handling in your configuration is essential for diagnosing issues:
match runner.run().await {
Ok(_) => println!("Runner completed successfully"),
Err(err) => match err {
RunnerError::Config(config_err) => {
eprintln!("Configuration error: {}", config_err);
}
RunnerError::NoRouter => {
eprintln!("Runner missing router configuration");
}
// Handle other error types
_ => eprintln!("Runner error: {}", err),
}
}
- Overview: Overview
- Architecture Overview: Architecture Overview
- Key Concepts: Key Concepts
- Protocol Support: Protocol Support
- Getting Started: Getting Started
- Installation: Installation
- Creating Your First Blueprint: Creating Your First Blueprint
- Example Blueprints: Example Blueprints
- Blueprint SDK: Blueprint SDK
- Core Components: Core Components
- Job System: Job System
- Router: Router
- Networking: Networking
- Keystore: Keystore
- Blueprint Runner: Blueprint Runner
- Runner Configuration: Runner Configuration
- Job Execution Flow: Job Execution Flow
- Blueprint Manager: Blueprint Manager
- Event Handling: Event Handling
- Blueprint Sources: Blueprint Sources
- CLI Reference: CLI Reference
- Blueprint Commands: Blueprint Commands
- Key Management Commands: Key Management Commands
- Deployment Options: Deployment Options
- Development: Development
- Build Environment: Build Environment
- Testing Framework: Testing Framework
- CI/CD: CI/CD
- Advanced Topics: Advanced Topics
- Networking Extensions: Networking Extensions
- Macro System: Macro System
- Custom Protocol Integration: Custom Protocol Integration
- BlueprintRunner: The main facade that provides a builder pattern for constructing a runner.
- BlueprintRunnerBuilder: A builder that allows configuration of producers, consumers, router, and background services.
- FinalizedBlueprintRunner: The internal runner implementation that orchestrates the execution flow.
- BlueprintConfig: A trait for protocol-specific configuration and registration.
- BackgroundService: A trait for services that run in the background during blueprint execution.
Sources: 39-76 101-110 122-129 448-467
The Blueprint Runner is a core component of the Tangle Blueprint framework responsible for orchestrating job execution, managing background services, and handling protocol-specific operations across various blockchain environments.
The Blueprint Runner follows a producer-consumer architecture pattern with a central router for job dispatching and execution. It provides a flexible system configurable for different blockchain protocols.
- BlueprintRunnerBuilder
- FinalizedBlueprintRunner
- Router
- BlueprintConfig
- Producers
- Consumers
- Background Services
- JobCall
- JobResult
- Tangle Protocol
- Eigenlayer Protocol
- Other Protocols
For more information about the overall Blueprint architecture, see Architecture Overview. For details on creating blueprints, see Creating Your First Blueprint.
To configure and set up the Blueprint Runner, use `BlueprintRunner::builder()` (with a config and environment) to create a new `BlueprintRunnerBuilder`, then chain its configuration methods:

1. Add a router: `.router(router)`
2. Add a producer: `.producer(producer)`
3. Add a consumer: `.consumer(consumer)`
4. Add a background service: `.background_service(service)`
5. Set a shutdown handler: `.with_shutdown_handler(handler)`

Finally, call `.run()` to execute the Blueprint.
The core functionality of the Blueprint Runner revolves around executing jobs:

1. Producers generate `JobCall` instances, each with a unique job ID.
2. The Router routes these calls to the appropriate handler functions.
3. Handlers process the jobs and return `JobResult` instances.
4. Consumers process the job results.
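The flow above can be sketched with plain standard-library types. This is a simplified, synchronous model of the producer → router → handler → consumer pipeline, not the SDK's actual async implementation; all names here are illustrative:

```rust
use std::collections::HashMap;

// Simplified model of the job pipeline: the real SDK routes async
// streams and sinks, but the shape of the data flow is the same.
type JobId = u32;
type JobCall = (JobId, String); // job ID + payload
type JobResult = String;
type Handler = fn(String) -> JobResult;

fn main() {
    // Router: maps job IDs to handler functions.
    let mut router: HashMap<JobId, Handler> = HashMap::new();
    router.insert(0, |_input| "Hello, world!".to_string());
    router.insert(1, |input| input.to_uppercase());

    // Producer: generates JobCall instances.
    let producer: Vec<JobCall> = vec![(0, String::new()), (1, "tangle".to_string())];

    // Consumer: collects JobResult instances.
    let mut consumer: Vec<JobResult> = Vec::new();

    for (job_id, payload) in producer {
        if let Some(handler) = router.get(&job_id) {
            // Route the call, handle it, and hand the result to the consumer.
            consumer.push(handler(payload));
        }
    }

    assert_eq!(consumer, vec!["Hello, world!".to_string(), "TANGLE".to_string()]);
}
```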
When an error occurs, the runner:
- Runs the shutdown handler
- Terminates all background services
- Stops all producers and consumers
- Returns the error to the caller
The Blueprint Runner provides:
- A unified execution environment for different blockchain protocols
- A flexible producer-consumer architecture for job processing
- Protocol-specific configuration and registration capabilities
- Robust error handling and recovery
It serves as the orchestration layer for blockchain applications.
The Blueprint Runner handles protocol-specific registration processes through the BlueprintConfig trait:
Within the `run()` method, the flow is:

1. Check `requires_registration()`; if registration is not needed, continue directly to execution.
2. If needed, call `register()`, then check `should_exit_after_registration()`.
3. If the runner should exit, return `Ok(())`; otherwise continue execution.
4. Set up and start background services, create the job execution loop, and run until an error or shutdown occurs.
5. Execute the shutdown handler.
The runner orchestrates this flow while also managing background services.
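The decision flow can be modeled with a trait mirroring the `BlueprintConfig` method names. This is an illustrative sketch, not the SDK's real trait (the actual signatures are async and take protocol-specific arguments):

```rust
// Illustrative model of the registration decision flow inside run().
// Method names mirror BlueprintConfig; bodies here are stand-ins.
trait BlueprintConfig {
    fn requires_registration(&self) -> bool;
    fn register(&self) -> Result<(), String>;
    fn should_exit_after_registration(&self) -> bool;
}

struct DemoConfig {
    registered: bool,
    exit_after_register: bool,
}

impl BlueprintConfig for DemoConfig {
    fn requires_registration(&self) -> bool { !self.registered }
    fn register(&self) -> Result<(), String> { Ok(()) }
    fn should_exit_after_registration(&self) -> bool { self.exit_after_register }
}

/// Returns Ok(true) if the runner should continue into the job loop,
/// Ok(false) if it should exit after registering.
fn run_registration(config: &impl BlueprintConfig) -> Result<bool, String> {
    if config.requires_registration() {
        config.register()?;
        if config.should_exit_after_registration() {
            return Ok(false); // exit runner after registration
        }
    }
    Ok(true) // continue execution
}

fn main() {
    // Unregistered operator with the default exit_after_register = true:
    // register, then exit.
    let cfg = DemoConfig { registered: false, exit_after_register: true };
    assert_eq!(run_registration(&cfg), Ok(false));

    // Already-registered operator: skip registration and continue.
    let cfg = DemoConfig { registered: true, exit_after_register: true };
    assert_eq!(run_registration(&cfg), Ok(true));
}
```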
Here's a basic example of how to configure and use the Blueprint Runner:
// Create a configuration (protocol-specific)
let config = TangleConfig::new(price_targets).with_exit_after_register(false);
// Create environment
let blueprint_env = BlueprintEnvironment::load()?;
// Create a router
let router = Router::new()
.route(0, async || "Hello, world!") // Route job ID 0 to a simple handler
.route(1, async |input: String| input.to_uppercase()); // Route job ID 1 to a handler that accepts a String
// Create and configure the runner
let result = BlueprintRunner::builder(config, blueprint_env)
.router(router)
.producer(some_producer)
.consumer(some_consumer)
.background_service(some_background_service)
.with_shutdown_handler(async { println!("Shutting down!") })
.run()
.await;
// Handle any errors from the runner
if let Err(e) = result {
eprintln!("Runner failed: {:?}", e);
}

The Blueprint Runner orchestrates the flow of jobs through the system using a producer-consumer pattern with a central router.
"Background Services" "Job Consumer" "Job Handler" "Router" "Blueprint Runner" "Job Producer" "Background Services" "Job Consumer" "Job Handler" "Router" "Blueprint Runner" "Job Producer"
Initialization Phase
Job Execution Phase
Termination Phase
alt[Error or Shutdown Signal]
start()
poll for jobs
JobCall
route(JobCall)
handle(JobCall)
JobResult
send(JobResult)
execute shutdown_handler
shutdown
terminate
Producers generate JobCall instances, which are routed through the router to appropriate handlers, resulting in JobResult instances that are then processed by consumers.
- Producer: A component that generates job calls (implements `Stream<Item = Result<JobCall, E>>`)
- Consumer: A component that processes job results (implements `Sink<JobResult>`)
- Router: A component that routes job calls to appropriate handlers based on job IDs
The Blueprint Runner is built using a fluent builder pattern that allows for flexible configuration.
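The fluent-builder idea can be sketched with a simplified stand-in (this models the pattern only; it is not the SDK's actual `BlueprintRunnerBuilder`, whose fields and types differ):

```rust
// Simplified stand-in illustrating the fluent builder pattern used by
// BlueprintRunner::builder(). All types and fields here are illustrative.
#[derive(Default)]
struct RunnerBuilder {
    router: Option<String>, // stands in for the real Router
    producers: Vec<String>, // stands in for real producers
    consumers: Vec<String>, // stands in for real consumers
}

impl RunnerBuilder {
    fn router(mut self, r: &str) -> Self { self.router = Some(r.to_string()); self }
    fn producer(mut self, p: &str) -> Self { self.producers.push(p.to_string()); self }
    fn consumer(mut self, c: &str) -> Self { self.consumers.push(c.to_string()); self }

    // Mirrors the RunnerError::NoRouter / NoProducers checks described below.
    fn run(self) -> Result<(), String> {
        if self.router.is_none() { return Err("NoRouter".to_string()); }
        if self.producers.is_empty() { return Err("NoProducers".to_string()); }
        Ok(())
    }
}

fn main() {
    // Each method consumes and returns the builder, enabling chaining.
    let result = RunnerBuilder::default()
        .router("my-router")
        .producer("block-producer")
        .consumer("tx-consumer")
        .run();
    assert!(result.is_ok());

    // Missing a router surfaces as an error, as with the real runner.
    assert_eq!(RunnerBuilder::default().run(), Err("NoRouter".to_string()));
}
```

Because each method takes `self` by value, an incomplete configuration is caught at `run()` time rather than mid-chain.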
The runner requires a BlueprintEnvironment that defines the context in which it operates. The environment's protocol settings influence which protocol the runner targets:

- `BlueprintEnvironment` — fields: `http_rpc_endpoint: String`, `ws_rpc_endpoint: String`, `keystore_uri: String`, `data_dir: Option<PathBuf>`, `protocol_settings: ProtocolSettings`, `test_mode: bool`; methods: `load() -> Result<BlueprintEnvironment, ConfigError>`, `keystore() -> Keystore`
- `ProtocolSettings` — variants: `None`, `Symbiotic`, `Tangle(TangleProtocolSettings)`, `Eigenlayer(EigenlayerProtocolSettings)`; method: `protocol() -> &str`
- `Protocol` — variants: `Tangle`, `Eigenlayer`, `Symbiotic`; methods: `from_env()` (returns an optional `Protocol`), `as_str() -> &str`
Sources: 210-241 36-52 105-114
The Blueprint Runner provides comprehensive error handling through a hierarchy of error types:
RunnerError
+ NoRouter
+ NoProducers
+ Keystore(KeystoreError)
+ Networking(NetworkingError)
+ Io(std::io::Error)
+ Config(ConfigError)
+ BackgroundService(String)
+ JobCall(JobCallError)
+ Producer(ProducerError)
+ Consumer(BoxError)
+ Tangle(TangleError)
+ Eigenlayer(EigenlayerError)
+ Other(Box<dyn Error>)
ConfigError
+ MissingTangleRpcEndpoint
+ MissingKeystoreUri
+ MissingBlueprintId
+ MissingServiceId
+ MalformedBlueprintId
+ MalformedServiceId
+ UnsupportedKeystoreUri
+ UnsupportedProtocol
+ UnexpectedProtocol
+ NoSr25519Keypair
+ InvalidSr25519Keypair
+ NoEcdsaKeypair
+ InvalidEcdsaKeypair
+ TestSetup
+ MissingEigenlayerContractAddresses
+ MissingSymbioticContractAddresses
+ Other(Box<dyn Error>)
JobCallError
+ JobFailed(Box<dyn Error>)
+ JobDidntFinish(JoinError)
ProducerError
+ StreamEnded
+ Failed(Box<dyn Error>)

The runner supports different blockchain protocols through the BlueprintConfig trait.
For Tangle Network, the runner handles:
- Operator registration with the network
- Service instantiation
- Job execution for Tangle-specific operations
Key components include:
- `TangleConfig`: Configuration for the Tangle protocol
- `TangleProtocolSettings`: Protocol-specific settings
- `register_impl`: Implementation of operator registration
// Example of operator registration implementation
fn register_impl() {
    // Registration logic here
}
Eigenlayer support includes both BLS and ECDSA cryptography options for operator registration:
// ECDSA registration hooks (method signatures)
fn requires_registration_ecdsa_impl(&self) -> bool;
fn register_ecdsa_impl(&self);

// BLS registration hooks (method signatures)
fn requires_registration_bls_impl(&self) -> bool;
fn register_bls_impl(&self);

For Eigenlayer, the runner supports both ECDSA and BLS cryptography:
- ECDSA configuration through `EigenlayerECDSAConfig`
- BLS configuration through `EigenlayerBLSConfig`
- Contract interaction through Alloy and EigenSDK libraries
The Blueprint Runner implements a robust error handling system that distinguishes between different types of errors:
- Configuration errors: Issues with the environment or protocol settings
- Execution errors: Problems during job execution
- Protocol-specific errors: Issues related to specific blockchain protocols
The system supports multiple fetcher types:
- GitHub Fetcher: Downloads binaries from GitHub releases.
- Container Source: Pulls and runs Docker/Podman containers.
- Testing Fetcher: Builds binaries from local source code (primarily for testing).
Each source implementation provides methods to:
- Fetch the blueprint (downloading, building, or pulling).
- Spawn processes or containers.
- Monitor execution status.
- Handle process termination.
- `fetch()`: Initiates the fetching process.
- `get_fetcher_candidates()`: Checks source type and retrieves appropriate fetchers.
- `spawn()`: Runs the process/container and returns a `ProcessHandle`.
- Active Gadgets: Manages the active processes and their statuses.
1. Fetch Blueprint:
   - Download binary, pull image, or build from source
   - Make executable
   - Ready to spawn
2. Run Process/Container:
   - Use `spawn()` to execute the process or container.
- tangle.rs
- config.rs
- event_handler.rs
- mod.rs
- main.rs
- utils.rs
- binary.rs
- container.rs
- github.rs
- sources/mod.rs
- testing.rs
The Blueprint Manager subscribes to network events and processes them to maintain the correct state of running blueprints and services. This dynamic execution is based on on-chain activity.
The ProcessHandle structure provides a consistent interface for monitoring and controlling processes:
| Method | Description |
|---|---|
| `status()` | Returns the current status of the process |
| `wait_for_status_change()` | Asynchronously waits for the process status to change |
| `abort()` | Sends a termination signal to the process |
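These three operations can be modeled with standard-library channels. This is an illustrative, synchronous stand-in; the real `ProcessHandle` uses an async `UnboundedReceiver` for status and a `oneshot` channel for aborts:

```rust
use std::sync::mpsc;

// Illustrative model of ProcessHandle using std channels. Status variants
// follow the documented lifecycle: Running, Finished, Error.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Status { Running, Finished, Error }

struct ProcessHandle {
    status_rx: mpsc::Receiver<Status>,
    cached_status: Status,
    abort_tx: Option<mpsc::Sender<()>>,
}

impl ProcessHandle {
    /// Returns the last observed status without blocking.
    fn status(&self) -> Status { self.cached_status }

    /// Blocks until the process reports a new status (async in the real SDK).
    fn wait_for_status_change(&mut self) -> Option<Status> {
        let next = self.status_rx.recv().ok()?;
        self.cached_status = next;
        Some(next)
    }

    /// Sends a termination signal; returns whether it was delivered.
    fn abort(&mut self) -> bool {
        self.abort_tx.take().map_or(false, |tx| tx.send(()).is_ok())
    }
}

fn main() {
    let (status_tx, status_rx) = mpsc::channel();
    let (abort_tx, abort_rx) = mpsc::channel();
    let mut handle = ProcessHandle {
        status_rx,
        cached_status: Status::Running,
        abort_tx: Some(abort_tx),
    };

    // A spawned process would push status updates as it runs.
    status_tx.send(Status::Finished).unwrap();

    assert_eq!(handle.status(), Status::Running);
    assert_eq!(handle.wait_for_status_change(), Some(Status::Finished));

    // The process side would listen on abort_rx and terminate.
    assert!(handle.abort());
    assert!(abort_rx.recv().is_ok());
}
```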
When the Blueprint Manager starts, it performs an initialization sequence to establish its event handling loop:
1. `run_blueprint_manager()` initializes the keystore and creates the source candidates.
2. It initializes the Tangle client and creates the services client.
3. It requests the initial event and queries the operator's blueprints.
4. It processes the initial state via `check_blueprint_events()` and `handle_tangle_event()`.
5. It then begins the event loop: for each `next_event()`, it runs `check_blueprint_events()` followed by `handle_tangle_event()`.
This process involves:
- Setting up the keystore for cryptographic operations
- Detecting available source candidates (container, native, etc.)
- Creating clients for interacting with the Tangle Network
- Processing the initial state from the chain
- Starting the event loop for continuous monitoring
When events indicate a blueprint needs attention, the manager verifies the blueprint and manages its services. This involves fetching the blueprint from appropriate sources and spawning instances of services.
The event handling pipeline connects the Tangle Network to running gadgets: an event polling loop receives Tangle events and passes them to the event processor (`check_blueprint_events`), which returns an `EventPollResult` carrying `needs_update` and any `blueprint_registrations`. When an update is needed, the blueprint verifier produces a `VerifiedBlueprint` from the available blueprint sources (GitHub, container, or testing), and the service starter fetches and spawns the corresponding gadgets as active services.
The VerifiedBlueprint structure manages the lifecycle of a blueprint from verification to service execution. It contains:
- Fetchers for different source types (GitHub, Container, Testing)
- The blueprint definition with metadata and service IDs
- Methods to start services and manage their lifecycle
When `needs_update` is true, the manager passes the event and poll result along, checks the available sources, queries the updated blueprints, and, for each blueprint, creates a `VerifiedBlueprint` and prepares its fetcher candidates. `start_services_if_needed()` then iterates over each service: for each fetcher, it calls `fetch()`, and if the fetch succeeds, `spawn()`s the service and adds it to the active gadgets. Finally, stale services are cleaned up.
When a blueprint needs to be instantiated, the Blueprint Manager selects appropriate sources based on the blueprint definition and system capabilities.
The Blueprint Manager processes several event types from the Tangle Network, each triggering specific behaviors. The event processing is handled by the check_blueprint_events function, which examines the events in each block and determines the appropriate actions:
- PreRegistration: Adds blueprint IDs to the registration queue when the operator is selected.
- Registered: Signals that a blueprint has been registered and services need to be updated.
- Unregistered: Triggers cleanup of services associated with the unregistered blueprint.
- ServiceInitiated: Indicates that a new service has been started for a blueprint.
- JobCalled: Logs when a job is called (primarily for informational purposes).
- JobResultSubmitted: Logs when a job result is submitted (primarily for informational purposes).
The Blueprint Manager tracks all running services and manages their lifecycle based on on-chain events.
A service moves through these states: PreRegistration (operator selected) → Registered (blueprint registered) → Fetching (event processing) → Failed (fetch error) or ServiceStarting (fetch successful) → Running (service started). From Running, a service can transition to Unregistered (blueprint unregistered), Error (service crashed, with auto-restart back to Fetching), or Finished (registration mode, where services run once and exit).
- Status Monitoring: Services report their status (Running, Finished, Error).
- Auto-Restart: Failed services are restarted on the next event cycle.
- Cleanup: Services for unregistered blueprints are terminated and removed.
- Registration Mode: Special execution mode where services run once and exit.
The Blueprint Manager uses configuration from multiple sources to set up event handling:
- BlueprintManagerConfig: Core configuration for the manager
- BlueprintEnvironment: Environment configuration for blueprints
- SourceCandidates: Available sources for fetching blueprints
When services are spawned, they receive environment variables and command-line arguments that allow them to connect to the appropriate resources:
| Environment Variable | Purpose |
|---|---|
| `HTTP_RPC_URL` | HTTP RPC endpoint for the Tangle Network |
| `WS_RPC_URL` | WebSocket RPC endpoint for the Tangle Network |
| `KEYSTORE_URI` | Path to the keystore for cryptographic operations |
| `BLUEPRINT_ID` | ID of the blueprint being executed |
| `SERVICE_ID` | ID of the specific service within the blueprint |
| `PROTOCOL` | Protocol the service should use (Tangle, EVM, etc.) |
| `DATA_DIR` | Directory for service data storage |
| `REGISTRATION_MODE_ON` | Flag indicating if the service is in registration mode |
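A spawned service can read these variables with `std::env`. The variable names follow the table above; the fallback values in this sketch are illustrative defaults, not mandated by the manager:

```rust
use std::env;

// Reading the standardized environment variables a spawned service receives.
// Names follow the documented table; fallback values are illustrative.
fn main() {
    let http_rpc = env::var("HTTP_RPC_URL")
        .unwrap_or_else(|_| "http://127.0.0.1:9944".to_string());

    // Numeric IDs arrive as strings and must be parsed.
    let blueprint_id: u64 = env::var("BLUEPRINT_ID")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(0);

    // Presence of the flag signals registration mode.
    let registration_mode = env::var("REGISTRATION_MODE_ON").is_ok();

    println!("rpc={http_rpc} blueprint={blueprint_id} registration={registration_mode}");
}
```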
The event handling system is designed to be resilient to failures:
- Source Selection Fallbacks: If a preferred source fails, the system tries alternative sources.
- Service Recovery: Failed services are detected and can be restarted.
- Error Logging: Comprehensive error logging provides visibility into issues.
- Graceful Shutdown: Services are properly terminated when the manager shuts down.
The trait defines these methods:

- `fetch() -> Result<...>`: Retrieves the blueprint code or binary from its source.
- `spawn(source_candidates, env, service, args, env_vars) -> Result<ProcessHandle>`: Creates a running process for the blueprint.
- `blueprint_id() -> u64`: Returns the unique ID of the blueprint.
- `name() -> String`: Returns a human-readable name for the source.

The `ProcessHandle` type has these fields and methods:

- Fields: `status: UnboundedReceiver<Status>`, `cached_status: Status`, `abort_handle: oneshot::Sender<...>`
- Methods: `new(status, abort_handle) -> ProcessHandle`, `status() -> Status`, `wait_for_status_change() -> Option<Status>`, `abort() -> bool`

A process status is one of `Running`, `Finished`, or `Error`.
Sources: 11-66
- tangle.rs
- forge.rs
- main.rs
- config.rs
- event_handler.rs
- mod.rs
- manager_main.rs
- utils.rs
- binary.rs
- container.rs
- github.rs
- sources_mod.rs
- testing.rs
Blueprint Sources are responsible for:
- Fetching blueprint code/binaries from various locations
- Preparing the execution environment
- Spawning and monitoring blueprint processes
- Managing process lifecycle and cleanup
For more details on the Blueprint Manager that orchestrates these sources, refer to the Blueprint Manager.
All blueprint sources implement the BlueprintSourceHandler trait, which defines the standard interface for source operations.
GitHub sources fetch pre-built binary executables from GitHub releases. They handle platform-specific binary selection, download, verification, and execution.
1. `fetch()`
2. `get_binary()`
3. Check if the binary already exists: if yes, verify its hash; if not, download it from GitHub
4. Verify the hash
5. `make_executable()`
6. `spawn()`
Key components:

- `GithubBinaryFetcher`: Handles GitHub-specific download and execution logic
- Binary selection based on platform (OS and architecture)
- Hash verification to ensure binary integrity
- Execution with appropriate permissions
Container sources pull and run Docker/Podman container images, providing an isolated execution environment.
Sources: [52-66](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/manager/src/sources/mod.rs#L52-L66)
[30-131](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/manager/src/executor/event_handler.rs#L30-L131)
Sources: [33-126](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/manager/src/sources/github.rs#L33-L126)
The ProcessHandle provides a standardized way to interact with running blueprint processes, supporting status monitoring and graceful termination.
The Blueprint framework supports three primary source types:
| Source Type | Description | Use Case | Configuration |
|---|---|---|---|
| GitHub | Fetches pre-built binaries from GitHub releases | Production deployments | GithubFetcher struct |
| Container | Pulls and runs Docker/Podman container images | Isolated deployments | ImageRegistryFetcher struct |
| Test | Builds executables from local source code | Development and testing | TestFetcher struct |
1. Fetching Docker Images: `fetch()` runs `docker pull` via the `DockerBuilder`.
2. Prepare Environment:
   - Adjust URLs for the container network
   - Mount the keystore volume
3. Start Container: `spawn()` starts the container.
4. Monitor container status via the Docker API.
5. Graceful shutdown and cleanup.

Key considerations:

- Network mapping for services (`host.containers.internal`)
- Keystore volume mounting for secure access
- Status monitoring through the Docker API
Test sources build executables from local source code via the `TestSourceFetcher`:

1. `fetch()` determines the repository root and parses the cargo package info.
2. `cargo build` compiles the executable from source.
3. The binary is given executable permissions.
4. `spawn()` executes the binary with its arguments.

Key points:

- `TestSourceFetcher`: Builds and executes from local source.
- Cargo integration: Supports Rust blueprint compilation.
- Development-focused: Includes support for hot reloading.
When a blueprint needs to be executed, the Blueprint Manager follows this source selection process:
A Tangle Network event triggers the Blueprint Manager, which queries the on-chain blueprint data for its list of available sources. Source selection compares that list against the source handlers available on the system and tries the best match for the user's preferred source type. If a matching source is available, the manager fetches and spawns the blueprint and monitors its process status; otherwise it tries the next source.
- Gathers all source candidates from on-chain blueprint definition.
- Filters based on available source handlers on the system.
- Prioritizes based on user preference (if specified).
- Falls back to next available source if primary fails.
- Handles test mode separately with specific requirements.
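The fallback logic can be modeled as iterating candidate sources in preference order until one fetches successfully. This is an illustrative sketch with stand-in function types, not the manager's real implementation:

```rust
// Illustrative model of source-selection fallback: try each candidate
// source in preference order until one fetches successfully.
type FetchResult = Result<String, String>;

fn select_and_fetch(candidates: &[(&str, fn() -> FetchResult)]) -> FetchResult {
    let mut last_err = "no sources available".to_string();
    for (name, fetch) in candidates {
        match fetch() {
            Ok(artifact) => return Ok(format!("{name}: {artifact}")),
            Err(e) => last_err = e, // fall back to the next source
        }
    }
    Err(last_err)
}

fn main() {
    // Hypothetical source outcomes for the example.
    fn github_fetch() -> FetchResult { Err("release asset not found".to_string()) }
    fn container_fetch() -> FetchResult { Ok("image pulled".to_string()) }

    // GitHub is preferred but fails; the container source is tried next.
    let result = select_and_fetch(&[
        ("github", github_fetch),
        ("container", container_fetch),
    ]);
    assert_eq!(result, Ok("container: image pulled".to_string()));
}
```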
Sources can be configured through various mechanisms:
The SourceCandidates struct determines which source technologies are available on the system:
pub struct SourceCandidates {
pub container: Option<Url>, // Docker/Podman socket URL
pub wasm_runtime: Option<String>, // WASM runtime path if available
pub preferred_source: SourceType, // User's preferred source type
}

The CLI provides options for configuring source preferences:
--preferred-source, -p <SOURCE_TYPE>
The preferred source type to use (container, native, wasm) [default: native]
--podman-host, -p <URL>
The location of the Podman/Docker socket [default: unix:///var/run/docker.sock]
Each source implementation is responsible for spawning and managing the lifecycle of its processes:
ProcessHandle "Blueprint Process" "Source Handler" "Blueprint Manager"
alt[Status == Error]
loop[Monitor]
alt[Shutdown]
fetch() Result<()> spawn(args, env_vars) Create Process ProcessHandle
status() Status abort() Kill abort() Kill
All blueprint sources receive a standardized set of environment variables and command-line arguments:
| Variable | Description | Example |
|---|---|---|
| HTTP_RPC_URL | HTTP RPC endpoint URL | http://127.0.0.1:9944 |
| WS_RPC_URL | WebSocket RPC endpoint URL | ws://127.0.0.1:9944 |
| KEYSTORE_URI | Path to the keystore | ./keystore |
| BLUEPRINT_ID | The blueprint ID | 42 |
| SERVICE_ID | The service ID | 1 |
| PROTOCOL | The protocol (Tangle, Eigenlayer) | tangle |
| CHAIN | The chain to connect to | testnet |
| DATA_DIR | Directory for blueprint data | ./data/blueprint-42-service |
The Blueprint Sources are tightly integrated with the Blueprint Manager:
Within the Blueprint Manager, the event handler triggers a fetch, the blueprint verifier creates the sources, and the service manager selects among the GitHub, container, and test source handlers. The selected source handler creates the running process; the process monitor reports status updates back to the service manager, which starts and stops services accordingly.
Sources: 250-393
The Manager initiates source operations based on:
- Chain events indicating new/updated blueprints
- Service requests and lifecycle events
- Registration requirements for services
Blueprint Sources provide a flexible mechanism for fetching, building, and running blueprints from various locations. The modular design allows for different deployment strategies while maintaining consistent environment variables and process management. The source system is designed to be extensible, allowing for new source types to be added in the future as needed.
The context-derive crate provides macros with the following features:
- Standard context derivation (`std` feature)
- Tangle-specific context derivation (`tangle` feature)
- EVM-specific context derivation (`evm` feature)
- Networking-specific context derivation (`networking` feature)
These macros complement the main FromRef derive macro by providing more specialized context trait implementations.
- Standard Context Derives
- Protocol-Specific Derives
- Tangle Context Derives
- EVM Context Derives
- Networking Context Derives
- Tangle Jobs
- EVM Jobs
- Networking Jobs
Sources: 1-53
- Cargo.toml
- context-derive Cargo.toml
- lib.rs
- rand_protocol.rs
- behaviour.rs
- peers.rs
- service.rs
- service_handle.rs
- mod.rs
- tests.rs
- stores Cargo.toml
The Blueprint framework provides several custom macros:
- `debug_job`: Improves error messages for job functions.
- `FromRef`: Derive macro for implementing context extraction.
- `load_abi`: Loads Ethereum contract ABIs (available with the `evm` feature).
- Better error messages
- Automatic context extraction
- ABI integration
#[debug_job]
#[derive(FromRef)]
load_abi!()

The macro system enhances the runtime experience with compile-time tools.
Includes a separate blueprint-context-derive crate with additional procedural macros for deriving Context Extension traits.
When a macro processes a job function, it checks the job constraints: is it an async function, does it use a valid context type, and is the return type compatible? A valid job passes through unchanged; an invalid one generates a specific compile error.
The FromRef derive macro automatically implements the FromRef trait for each field in a struct, useful for extracting specific components from a larger context object.
Sources: [31-170](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/macros/src/lib.rs#L31-L170), [debug_job.rs](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/macros/src/debug_job.rs)

The debug_job attribute macro significantly improves error messages when working with job functions, making it easier to understand why a function doesn't correctly implement the Job trait.
In the Blueprint framework, job functions need to satisfy certain constraints:
- They must be async functions
- They need appropriate parameter types
- They must have compatible return types
Apply the debug_job attribute to any function meant to be used as a job:
use blueprint_sdk::macros::debug_job;
#[debug_job]
async fn my_job() -> &'static str {
"Hello, world"
}

The debug_job macro can automatically infer the context type from a Context parameter:
use blueprint_sdk::extract::Context;
#[debug_job]
async fn job(Context(context): Context<AppContext>) {
// The macro infers AppContext as the context type
}

You can also explicitly specify the context type:
#[debug_job(context = AppContext)]
async fn job(Context(app_ctx): Context<AppContext>, Context(inner_ctx): Context<InnerContext>) {
// ...
- The macro doesn't work for functions in an `impl` block that don't have a `self` parameter
- It has no effect when compiled with the release profile
Apply the #[derive(FromRef)] attribute to a struct that serves as a context container:
use blueprint_sdk::extract::{Context, FromRef};
#[derive(FromRef, Clone)]
struct AppContext {
keystore: Keystore,
database_pool: DatabasePool,
#[from_ref(skip)]
secret_token: String,
}Once implemented, you can extract individual components using the Context extractor:
async fn job(Context(keystore): Context<Keystore>) {
// Only extract the keystore from AppContext
}
async fn another_job(Context(database): Context<DatabasePool>) {
// Only extract the database pool from AppContext
}

This pattern enables clean dependency injection where each job function only accesses the components it needs.
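What the derive generates can be written out by hand. This is an illustrative model with a simplified stand-in for the SDK's `FromRef` trait (the real trait and extractor plumbing may differ):

```rust
// Hand-written model of what #[derive(FromRef)] produces: one impl per
// non-skipped field, each extracting a clone of that component.
trait FromRef<T> {
    fn from_ref(input: &T) -> Self;
}

#[derive(Clone)]
struct Keystore(String);

#[derive(Clone)]
struct DatabasePool(String);

#[derive(Clone)]
struct AppContext {
    keystore: Keystore,
    database_pool: DatabasePool,
}

impl FromRef<AppContext> for Keystore {
    fn from_ref(ctx: &AppContext) -> Self { ctx.keystore.clone() }
}

impl FromRef<AppContext> for DatabasePool {
    fn from_ref(ctx: &AppContext) -> Self { ctx.database_pool.clone() }
}

fn main() {
    let ctx = AppContext {
        keystore: Keystore("keys".to_string()),
        database_pool: DatabasePool("pool".to_string()),
    };

    // A job asking for Context<Keystore> receives just this component.
    let ks = Keystore::from_ref(&ctx);
    assert_eq!(ks.0, "keys");

    let db = DatabasePool::from_ref(&ctx);
    assert_eq!(db.0, "pool");
}
```

This is why every extractable field must implement `Clone`: extraction hands each job its own copy of the component.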
The load_abi procedural macro is available when the evm feature is enabled. It loads the JSON ABI (Application Binary Interface) for an Ethereum smart contract.
```rust
use blueprint_sdk::macros::load_abi;

// Load ABI from a file path
const CONTRACT_ABI: &str = load_abi!("path/to/contract.json");
```

The `load_abi!()` macro parses the JSON, validates the ABI format, and embeds the result in the compiled code, where the EVM contract client can use it to send transactions and call contract functions.
The Blueprint macro system is implemented as a set of procedural macros that operate at compile time.
## Best Practices
When working with the Blueprint Macro System, consider these best practices:
1. **Use `debug_job` during development**:
- Apply it to job functions to get better error messages.
- Remove it in production for optimal performance (it automatically disables itself in release builds).
2. **Organize context with `FromRef`**:
- Keep context objects clean and modular.
- Use `#[from_ref(skip)]` for sensitive fields that shouldn't be accessible directly.
- Ensure all extractable fields implement `Clone`.
3. **When working with EVM contracts**:
- Use `load_abi!` to ensure compile-time validation of ABI files.
- Keep ABI files in a standard location for consistency.
4. **Testing with macros**:
- Test derived implementations to ensure they behave as expected.
- Be aware that some macros might have different behavior in debug and release builds.
## Troubleshooting
## Custom Protocol Integration

Source: https://deepwiki.com/tangle-network/blueprint/8.3-custom-protocol-integration
## Documentation
## Introduction to Tangle Network's Blueprint Framework
### Tangle Network's Blueprint Framework Overview
- **Overview**: [Overview](https://deepwiki.com/tangle-network/blueprint/1-overview)
- **Architecture Overview**: [Architecture Overview](https://deepwiki.com/tangle-network/blueprint/1.1-architecture-overview)
- **Key Concepts**: [Key Concepts](https://deepwiki.com/tangle-network/blueprint/1.2-key-concepts)
- **Protocol Support**: [Protocol Support](https://deepwiki.com/tangle-network/blueprint/1.3-protocol-support)
### Getting Started
- **Installation**: [Installation](https://deepwiki.com/tangle-network/blueprint/2.1-installation)
- **Creating Your First Blueprint**: [Creating Your First Blueprint](https://deepwiki.com/tangle-network/blueprint/2.2-creating-your-first-blueprint)
- **Example Blueprints**: [Example Blueprints](https://deepwiki.com/tangle-network/blueprint/2.3-example-blueprints)
### Core Components
- **Blueprint SDK**: [Blueprint SDK](https://deepwiki.com/tangle-network/blueprint/3-blueprint-sdk)
- **Job System**: [Job System](https://deepwiki.com/tangle-network/blueprint/3.2-job-system)
- **Router**: [Router](https://deepwiki.com/tangle-network/blueprint/3.3-router)
- **Networking**: [Networking](https://deepwiki.com/tangle-network/blueprint/3.4-networking)
- **Keystore**: [Keystore](https://deepwiki.com/tangle-network/blueprint/3.5-keystore)
### Blueprint Runner
- **Runner Configuration**: [Runner Configuration](https://deepwiki.com/tangle-network/blueprint/4.1-runner-configuration)
- **Job Execution Flow**: [Job Execution Flow](https://deepwiki.com/tangle-network/blueprint/4.2-job-execution-flow)
### Blueprint Manager
- **Event Handling**: [Event Handling](https://deepwiki.com/tangle-network/blueprint/5.1-event-handling)
- **Blueprint Sources**: [Blueprint Sources](https://deepwiki.com/tangle-network/blueprint/5.2-blueprint-sources)
### CLI Reference
- **Blueprint Commands**: [Blueprint Commands](https://deepwiki.com/tangle-network/blueprint/6.1-blueprint-commands)
- **Key Management Commands**: [Key Management Commands](https://deepwiki.com/tangle-network/blueprint/6.2-key-management-commands)
- **Deployment Options**: [Deployment Options](https://deepwiki.com/tangle-network/blueprint/6.3-deployment-options)
### Development
- **Build Environment**: [Build Environment](https://deepwiki.com/tangle-network/blueprint/7.1-build-environment)
- **Testing Framework**: [Testing Framework](https://deepwiki.com/tangle-network/blueprint/7.2-testing-framework)
- **CI/CD**: [CI/CD](https://deepwiki.com/tangle-network/blueprint/7.3-cicd)
### Advanced Topics
- **Networking Extensions**: [Networking Extensions](https://deepwiki.com/tangle-network/blueprint/8.1-networking-extensions)
- **Macro System**: [Macro System](https://deepwiki.com/tangle-network/blueprint/8.2-macro-system)
- **Custom Protocol Integration**: [Custom Protocol Integration](https://deepwiki.com/tangle-network/blueprint/8.3-custom-protocol-integration)
## Understanding the Architecture
### Dependencies
- **Blueprint Runner**
- **Client Architecture**
- `blueprint-client-core`
- **Client trait**
- **Context trait**
- `blueprint-client-tangle`
- `blueprint-client-evm`
- `blueprint-client-eigenlayer`
- **Custom Protocol Client**
- `TangleContext`
- `EVMContext`
- `EigenlayerContext`
- **Custom Protocol Context**
- **Client Registry**
- `tangle-subxt`
- `alloy`
- `eigensdk`
- **Custom Protocol SDK**
### Code Snippets
```toml
# Cargo.toml dependencies
```

Sources:
- [blueprint-client-core](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/core/Cargo.toml)
- [blueprint-client-tangle](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/tangle/Cargo.toml#L15-L30)
- [blueprint-client-evm](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/evm/Cargo.toml#L15-L41)
- [blueprint-client-eigenlayer](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/eigenlayer/Cargo.toml#L15-L42)
The Blueprint client system is based on two key abstractions:
- Client Core Interface: Defines the core functionality that all protocol clients must implement.
- Context Providers: Protocol-specific contexts that provide access to blockchain functionality.
At the heart of the protocol integration is the Client trait defined in the blueprint-client-core crate. This trait defines the essential methods that all protocol implementations must provide.
The `Client` trait defines:

- `deploy(specs, options) -> Future<DeploymentResult>`
- `register(options) -> Future<RegisterResult>`
- `interact(interaction, options) -> Future<InteractionResult>`
- `query(options) -> Future<QueryResult>`
- `create_context() -> Future<Context>`

It is implemented by `TangleClient`, `EVMClient`, `EigenlayerClient`, and any custom protocol client. Each client produces a `Context`, which exposes:

- `protocol_name() -> String`
- `chain_id() -> ChainId`

Context implementations include `TangleContext`, `EVMContext`, `EigenlayerContext`, and custom protocol contexts.

This document provides guidance on extending the Tangle Blueprint framework to support custom blockchain protocols beyond the built-in support for Tangle Network, EVM, and Eigenlayer.
The Blueprint framework uses a modular client architecture that enables seamless integration with different blockchain protocols. Each protocol integration follows a common pattern while allowing for protocol-specific implementation details.
Relevant source files:
- Cargo.lock
- Cargo.toml (root)
- Cargo.toml (cli)
- Cargo.toml (eigenlayer client)
- Cargo.toml (evm client)
- Cargo.toml (tangle client)
- Cargo.toml (contexts)
- Cargo.toml (manager)
- Cargo.toml (sdk)
- Cargo.toml (anvil testing utils)
- Cargo.toml (core testing utils)
- Cargo.toml (eigenlayer testing utils)
- Cargo.toml (tangle testing utils)
To implement support for a custom blockchain protocol, you need to create a new client crate that implements the core client interface for your protocol.
Start by creating a new crate for your custom protocol client:

```
blueprint-client-myprotocol/
├── Cargo.toml
└── src/
    ├── lib.rs
    ├── client.rs
    ├── context.rs
    └── types.rs
```
The Cargo.toml file should include dependencies on the core Blueprint components and your protocol-specific dependencies:
```toml
[dependencies]
blueprint-core = { workspace = true }
blueprint-std = { workspace = true }
blueprint-client-core = { workspace = true }

# Protocol-specific dependencies
myprotocol-sdk = "x.y.z"
```
Modify the client registry to include your custom protocol client:
The Blueprint Runner adds clients (`TangleClient`, `EVMClient`, `EigenlayerClient`, `MyProtocolClient`) to the `ClientRegistry` via `register_client` / `with_client`, and retrieves them later with `get_client::<T>()`.

Add feature flags for your protocol in the Blueprint SDK:
```toml
[features]
# Existing features
tangle = [...]
evm = [...]
eigenlayer = [...]

# Your custom protocol
myprotocol = [
    "dep:blueprint-client-myprotocol",
    "blueprint-contexts/myprotocol",
    "blueprint-context-derive?/myprotocol",
    "blueprint-testing-utils?/myprotocol",
    "blueprint-runner/myprotocol",
]
```

To make your protocol available through the Blueprint CLI, update the CLI package:
```rust
// Add support for your protocol in the CLI commands
#[derive(clap::ValueEnum, Clone, Debug)]
pub enum Protocol {
    Tangle,
    EVM,
    Eigenlayer,
    MyProtocol, // Add your protocol here
}
```

Test your protocol integration by creating a simple Blueprint that uses your custom protocol. Ensure that:
- The client can be registered with the Runner
- Deployment operations work correctly
- Interactions with the blockchain function as expected
- Contexts provide the correct protocol-specific functionality
```rust
#[test]
fn test_custom_protocol_integration() {
    let runner = BlueprintRunner::new()
        .with_client(MyProtocolClient::new(...))
        .build();
}
```

Integrating a custom blockchain protocol with the Tangle Blueprint framework involves:
- Implementing the Client trait for your protocol
- Creating a context provider for protocol-specific functionality
- Adding any necessary cryptographic support
- Creating testing utilities
- Updating the Blueprint SDK and CLI to support your protocol
By following the patterns established by existing protocol implementations (Tangle, EVM, and Eigenlayer), you can seamlessly extend the Blueprint framework to support additional blockchain protocols.
In client.rs, implement the Client trait for your custom protocol:
```rust
use blueprint_client_core::{Client, DeploymentResult, RegisterResult, InteractionResult, QueryResult};
use async_trait::async_trait;

pub struct MyProtocolClient {
    // Protocol-specific fields
}

#[async_trait]
impl Client for MyProtocolClient {
    async fn deploy(&self, specs: DeploymentSpecs, options: DeploymentOptions) -> Result<DeploymentResult, Error> {
        // Implementation for deploying to your protocol
        todo!()
    }

    async fn register(&self, options: RegisterOptions) -> Result<RegisterResult, Error> {
        // Implementation for registering with your protocol
        todo!()
    }

    async fn interact(&self, interaction: Interaction, options: InteractionOptions) -> Result<InteractionResult, Error> {
        // Implementation for interacting with your protocol
        todo!()
    }

    async fn query(&self, options: QueryOptions) -> Result<QueryResult, Error> {
        // Implementation for querying your protocol
        todo!()
    }

    async fn create_context(&self) -> Result<Box<dyn Context>, Error> {
        // Create and return a protocol-specific context
        Ok(Box::new(MyProtocolContext::new(...)))
    }
}
```

In context.rs, implement a context provider for your protocol:
```rust
use blueprint_contexts::Context;

pub struct MyProtocolContext {
    // Protocol-specific state
}

impl Context for MyProtocolContext {
    fn protocol_name(&self) -> String {
        "myprotocol".to_string()
    }

    fn chain_id(&self) -> ChainId {
        // Return the appropriate chain ID
        todo!()
    }

    // Additional protocol-specific methods
}
```

Add your protocol to the context providers:
```toml
[features]
# Existing features
evm = ["blueprint-clients/evm"]
eigenlayer = ["blueprint-clients/eigenlayer"]
tangle = ["dep:tangle-subxt", "blueprint-clients/tangle"]

# Your custom protocol
myprotocol = ["blueprint-clients/myprotocol"]
```

Depending on your protocol, you may need to implement additional components:
For protocol-specific cryptography, create a new crate in the crypto module:
```
blueprint-crypto-myprotocol/
├── Cargo.toml
└── src/
    ├── lib.rs
    └── keys.rs
```
Implement the necessary cryptographic primitives for your protocol, such as key generation, signing, and verification.
Create testing utilities for your protocol:
```
blueprint-myprotocol-testing-utils/
├── Cargo.toml
└── src/
    ├── lib.rs
    └── mock.rs
```
These utilities should provide mocks and helpers for testing blueprints that use your protocol.
Here's a complete flow for implementing and integrating a custom protocol:
[Implementation details not provided due to unsupported markdown]
```rust
// Test deployment
let result = runner.deploy_to::<MyProtocolClient>(...).await;
assert!(result.is_ok());

// Test interaction
let result = runner.interact_with::<MyProtocolClient>(...).await;
assert!(result.is_ok());
```

- Maintain Consistent Interfaces: Ensure your client implementation follows the same patterns as the existing protocol implementations.
- Comprehensive Testing: Create thorough tests for all aspects of your protocol integration.
- Error Handling: Provide clear, protocol-specific error types and comprehensive error handling.
- Documentation: Document protocol-specific behaviors and limitations.
- Feature Isolation: Use feature flags to ensure your protocol implementation is only included when needed.
## Build Environment
Before setting up the build environment, ensure you have the following prerequisites installed on your system:
- Git
- A Unix-like operating system (Linux, macOS) or Windows with WSL
- Internet connectivity for downloading dependencies
For the recommended Nix-based setup, you'll need:
- Nix package manager with flakes enabled
For manual setup, you'll need:
- Rust 1.86 or later
- OpenSSL development libraries
- GMP (GNU Multiple Precision Arithmetic Library)
- Protocol Buffers compiler
- Node.js 22 (for TypeScript/web components)
There are two primary methods for setting up the build environment:
Protocol Dependencies
Development Tools
Components
Setup Options
Recommended
Alternative
Nix Development Environment
Complete Environment
Manual Setup
Rust Toolchain (1.86)
Development Tools
Protocol Dependencies
Cargo Extensions
Foundry (Ethereum)
Node.js Ecosystem
OpenSSL
GMP Library
Protocol Buffers
The project provides a Nix flake for setting up a consistent development environment across all supported platforms. This is the recommended approach as it ensures all dependencies are correctly configured and versioned.
To enter the development environment with Nix:

1. Clone the repository:

   ```shell
   git clone https://github.com/tangle-network/blueprint
   cd blueprint
   ```

2. Enter the Nix development shell:

   ```shell
   nix develop
   ```

This sets up a complete environment with all required tools and dependencies.

If you prefer not to use Nix, you can manually install the required dependencies:

1. Install Rust 1.86 using rustup:

   ```shell
   curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
   rustup default 1.86
   rustup component add cargo rustfmt clippy rust-src
   ```

2. Install system dependencies (examples shown for Ubuntu):

   ```shell
   sudo apt update
   sudo apt install build-essential pkg-config libssl-dev libgmp-dev protobuf-compiler clang libclang-dev
   ```

3. Install Node.js 22 for TypeScript components:

   ```shell
   # Using nvm or your preferred Node.js installation method
   nvm install 22
   npm install -g yarn typescript
   ```

4. Install recommended Cargo extensions:

   ```shell
   cargo install cargo-nextest cargo-expand cargo-dist
   ```

5. Install Foundry for Ethereum development:

   ```shell
   curl -L https://foundry.paradigm.xyz | bash
   foundryup
   ```
The project uses Rust 1.86 with specific components as defined in the rust-toolchain.toml file:

Required components:

- cargo
- rustfmt
- clippy
- rust-src
The Nix development environment provides a comprehensive set of tools and libraries for Blueprint development.
Relevant source files:
- nextest.toml
- .gitignore
- error.rs
- config.rs
- mod.rs
- mod.rs (storage)
- substrate.rs
- aggregator_selection.rs
- flake.lock
- flake.nix
- rust-toolchain.toml
The Blueprint framework supports multiple keystore storage backends:
- InMemoryStorage
- FileStorage (std)
- SubstrateStorage (substrate-keystore)
- AWS Signer
- GCP Signer
- Ledger
Key function: `Keystore::new()`

- Local Storage
- Remote Storage
- Storage Module: mod.rs
- Configuration: config.rs
- Substrate Storage: substrate.rs
- The Keystore is responsible for securely managing cryptographic keys.
- The storage module handles the persistence of keys.
- Configuration settings dictate how the Keystore operates within the application.
Refer to the linked files for specific implementations and examples.
When working with the Blueprint project, several directories and files are created during the build process:
| Directory/File | Description |
|---|---|
| `/target` | The main Rust build output directory containing compiled artifacts |
| `/crates/*/target` | Individual crate build outputs (if built separately) |
| `/node_modules` | Node.js dependencies for TypeScript/web components |
| `/contracts/out` | Compiled smart contract artifacts from Foundry |
| `/contracts/cache` | Foundry cache for smart contract compilation |
| `blueprint.lock` | Lock file for Blueprint dependencies |
| `blueprint.json` | Configuration file for Blueprint projects |
| `/cache` | General cache directory for the project |
| `.direnv` | Directory created by direnv when using Nix |
The project uses Nextest for Rust test running with custom profiles:
| Profile | Description |
|---|---|
| `ci` | Profile for continuous integration with immediate test failure output |
| `serial` | Single-threaded test execution for tests that cannot run in parallel |
To run tests with a specific profile:
cargo nextest run --profile ci
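The two profiles described above would typically live in a nextest configuration file. A hypothetical sketch (the repository's actual settings may differ):

```toml
# Hypothetical .config/nextest.toml matching the profiles described above
[profile.ci]
failure-output = "immediate"

[profile.serial]
test-threads = 1
```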
The build environment supports various cryptographic backends through feature flags, affecting dependencies included at compile time and available functionality:

Key type features:

- `ecdsa`
- `sr25519-schnorrkel`
- `zebra` (ed25519)
- `bls`
- `bn254`
- `sp-core`
- `tangle`

Storage backend features:

- `std` (FileStorage)
- `substrate-keystore`
- `aws-signer`
- `gcp-signer`
- `ledger-browser`
- `ledger-node`
You can enable or disable features using Cargo's feature flag system. For example:
cargo build --features "ecdsa sr25519-schnorrkel std"
The typical development workflow involves these steps:
Passes
Fails
Deployment
cargo-tangle
Deploy Commands
Test Commands
cargo nextest run
cargo test
Build Commands
cargo build
cargo clippy
Enter Development Environment
Edit Code
Build Project
Run Tests
Prepare Deployment
Deploy to Target Network
| Issue | Solution |
|---|---|
| Missing crypto feature | Add the required feature to your build command or dependency specification |
| Keystore access errors | Check file permissions if using FileStorage, or connectivity for remote keystores |
| Protocol buffer compile errors | Ensure protobuf-compiler is installed and on your PATH |
| Build failures on macOS | Ensure Apple frameworks are available (Security, SystemConfiguration) |
| Slow compilation | On Linux, ensure mold linker is enabled; on macOS consider using lld |
For enhanced development experience, the environment includes:
- rust-analyzer: Provides IDE integrations for Rust
- cargo-expand: Useful for debugging macros by expanding them
- cargo-dist: Creates distribution packages
- taplo: TOML file formatter and linter
- foundry: Ethereum development framework for smart contracts
- TypeScript: For web component development
These extensions build upon the solid foundation of Blueprint's core networking layer, allowing developers to implement advanced distributed protocols without handling low-level networking complexities.
- Networking Extensions
- Overview
- Aggregated Signature Gossip Extension
- Architecture
- Features
- Dependencies
- Round-Based Protocol Extension
- Integration Example
- Example: Randomness Generation Protocol
- Using the Round-Based Extension
- Enabling Extensions
Blueprint's networking extensions enhance core networking functionality with specialized capabilities for distributed protocols. The SDK provides a modular networking architecture that can be extended with specialized components for different protocol requirements.
- Aggregated Signature Gossip Extension: Enables efficient collection and aggregation of cryptographic signatures.
- Round-Based Protocol Extension: Facilitates implementation of synchronous round-based protocols.
Relevant source files:
- core/Cargo.toml
- crypto/Cargo.toml
- evm-extra/Cargo.toml
- macros/src/lib.rs
- networking/Cargo.toml
- extensions/agg-sig-gossip/Cargo.toml
- extensions/round-based/Cargo.toml
- extensions/round-based/tests/rand_protocol.rs
- blueprint_protocol/behaviour.rs
- discovery/peers.rs
- service.rs
- service_handle.rs
- test_utils/mod.rs
- tests/mod.rs
- producers-extra/Cargo.toml
- router/Cargo.toml
- stores/local-database/Cargo.toml
- testing-utils/Cargo.toml
To use these extensions in your Blueprint project, you need to add them as dependencies and enable the required features:
```toml
[dependencies]
blueprint-networking-agg-sig-gossip-extension = "0.1.0-alpha.3"
blueprint-crypto = { version = "0.1.0-alpha.4", features = ["aggregation"] }
```

```toml
[dependencies]
blueprint-networking-round-based-extension = "0.1.0-alpha.4"
round-based = "0.1.0"
```

Blueprint's networking extensions provide specialized capabilities for building complex distributed protocols:
- The Aggregated Signature Gossip Extension facilitates efficient signature collection and aggregation, essential for consensus protocols and threshold cryptography.
- The Round-Based Protocol Extension enables implementation of synchronized multi-party computation protocols with well-defined rounds and message flows.
The Aggregated Signature Gossip Extension provides functionality for collecting and aggregating cryptographic signatures over a peer-to-peer network, useful for consensus algorithms and threshold signature schemes.
Supported Signature Schemes
- BLS Signatures
- BN254 Curve
- libp2p::gossipsub
The Aggregated Signature Gossip Extension provides:
- Efficient peer-to-peer distribution of signatures
- Aggregation of signatures from multiple participants
- Support for various signature schemes including BLS and BN254
- Integration with Blueprint's crypto library
The extension relies on the following components:
| Dependency | Purpose |
|---|---|
| `blueprint-core` | Core functionality |
| `blueprint-crypto` | Cryptographic operations (with aggregation feature) |
| `blueprint-networking` | Network communication |
| `libp2p` | Underlying p2p network stack |
The Round-Based Protocol Extension integrates with the round-based crate to support protocols that operate in synchronized rounds, particularly useful for multi-party computation (MPC) protocols.
Message flow: the `blueprint-networking-round-based-extension` adapts the `NetworkServiceHandle` to implement the `round-based::Delivery` trait. P2P messages sent and received over the network are transformed into round messages, which are consumed by the protocol implementation; a `RoundsRouter` manages the individual rounds (Round 1, Round 2, and so on).
The extension provides an adapter (RoundBasedNetworkAdapter) that connects Blueprint's networking layer to the round-based protocol framework:
Protocol messages (`CommitMsg { commitment: Output }`, `DecommitMsg { randomness: [u8; 32] }`) are registered with a `RoundsRouter` via `add_round` (as `RoundInput`s) and distributed with `broadcast`. Protocol execution then proceeds as follows:

1. Generate `local_randomness`.
2. Compute `commitment = Sha256::digest(local_randomness)`.
3. Send `CommitMsg` and receive all commitments.
4. Send `DecommitMsg` and receive all reveals.
5. Verify commitments match revealed values.
6. Combine values with XOR.
To use the Round-Based Extension:
1. Create a `RoundBasedNetworkAdapter` connecting to your Blueprint network.
2. Create an `MpcParty` using the adapter.
3. Define your protocol using the `round-based` framework.
4. Execute the protocol with the MPC party.
Example from the test file:
```rust
// Create the adapter connecting Blueprint networking to round-based
let node_network = RoundBasedNetworkAdapter::new(
    network_handle,  // NetworkServiceHandle from Blueprint
    participant_id,  // Local participant index (0, 1, etc.)
    &participants,   // Mapping from participant IDs to peer IDs
    instance_id,     // Protocol instance identifier
);

// Create an MPC party using the adapter
let mpc_party = MpcParty::connected(node_network);

// Run the protocol
let result = protocol_of_random_generation(
    mpc_party,
    participant_id,
    num_participants,
    rng,
).await;
```
The test file rand_protocol.rs demonstrates a two-round randomness generation protocol implemented using the Round-Based Extension:
- Round 1 (Commit): Each party generates a random value and broadcasts a commitment (hash).
- Round 2 (Reveal): Each party reveals their random value.
- Output: All parties verify and combine the revealed values to produce shared randomness.
Key functions:

- `create(network_handle, participant_id, participants, instance_id)`
- `send(message)` and `send(peer_id, serialized_message)` to transmit messages and send responses
- `receive(message)` and `deliver(round_message)`
## Key Management Commands
Sources: CLI Implementation Code
This page documents the key management commands available in the cargo-tangle CLI tool. These commands provide capabilities for generating, importing, exporting, and managing cryptographic keys used across the Tangle Blueprint framework.
The `cargo tangle key` command provides five subcommands (with single-letter aliases): `generate` (`g`), `import` (`i`), `export` (`e`), `list` (`l`), and `generate-mnemonic` (`m`).
- Generate Key (`generate`): parameters are key type, seed, and output path
- Import Key (`import`): parameters are key type, secret, and keystore path
- Export Key (`export`): parameters are key type, public key, and keystore path
- List Keys (`list`): parameter is keystore path
- Generate Mnemonic (`generate-mnemonic`): parameter is word count
- nextest.toml
- README.md
- CLI README
- Create Command
- Keys Command
- Eigenlayer Command
- Forge
- Main
- Keystore Error
- Keystore Config
- Keystore Module
- Storage Module
- Substrate Storage
- Aggregator Selection
Command:
cargo tangle key generate [OPTIONS]
Options:

- `-t, --key-type <KEY_TYPE>`: The type of key to generate (required)
- `-o, --output <OUTPUT>`: Path to save the key to (optional)
- `--seed <SEED>`: The seed to use for key generation (hex format without 0x prefix) (optional)
- `-v, --show-secret`: Show the secret key in output (optional)
Example:
cargo tangle key generate -t sr25519 -o ./my-keys --show-secret
This command generates an Sr25519 key pair, saves it to the ./my-keys directory, and displays both the public and private keys.
Command:
cargo tangle key import [OPTIONS]
cargo tangle k i [OPTIONS]
Options:

- `-t, --key-type <KEY_TYPE>`: The type of key to import (optional)
- `-x, --secret <SECRET>`: The secret key to import (hex format without 0x prefix) (optional)
- `-k, --keystore-path <KEYSTORE_PATH>`: The path to the keystore (required)
- `-p, --protocol <PROTOCOL>`: The protocol for the key (tangle, eigenlayer) (defaults to tangle)
If no key type is provided, the command will enter an interactive mode to prompt for the key type and value.
Example:
cargo tangle key import -t ecdsa -x 1a2b3c4d... -k ./keystore -p eigenlayer
This command imports an ECDSA private key into the specified keystore path for use with Eigenlayer.
Command:
cargo tangle key export [OPTIONS]
cargo tangle k e [OPTIONS]
Command:
cargo tangle key export -t <KEY_TYPE> -p <PUBLIC> -k <KEYSTORE_PATH>
Options:

- `-t, --key-type <KEY_TYPE>`: The type of key to export (required)
- `-p, --public <PUBLIC>`: The public key to export (hex format without 0x prefix) (required)
- `-k, --keystore-path <KEYSTORE_PATH>`: The path to the keystore (required)
Example:
cargo tangle key export -t sr25519 -p abcdef1234... -k ./keystore
This command exports the Sr25519 private key corresponding to the specified public key from the keystore.
Command:
cargo tangle key list [OPTIONS]
cargo tangle k l [OPTIONS]
Options:

- `-k, --keystore-path <KEYSTORE_PATH>`: The path to the keystore (required)
Example:
cargo tangle key list -k ./keystore
This command lists all keys stored in the specified keystore, displaying the key type and public key for each.
Generates a new mnemonic phrase that can be used to derive keys.
Command:
cargo tangle key generate-mnemonic [OPTIONS]
cargo tangle k m [OPTIONS]
Options:

- `-w, --word-count <WORD_COUNT>`: Number of words in the mnemonic (12, 15, 18, 21, or 24) (optional, defaults to 12)
Example:
cargo tangle key generate-mnemonic -w 24
This command generates a 24-word mnemonic phrase.
The key management commands are integrated with other CLI commands that require keys: the `cargo-tangle blueprint` subcommands (`run`, `deploy`, `submit`) read keys from the path given by `--keystore-path`, falling back to `prompt_for_keys()` when none are found.
When deploying a blueprint or running a service, if no keystore is found at the specified path, the CLI will prompt the user to create keys for the necessary key types.
The Blueprint framework supports multiple key types for different cryptographic requirements and blockchain protocols:
| Key Type | Description | Primarily Used For |
|---|---|---|
| `sr25519` | Schnorrkel/Ristretto x25519 | Tangle Network, Substrate-based chains |
| `ed25519` | Edwards-curve Digital Signature Algorithm | General-purpose signatures |
| `ecdsa` | Elliptic Curve Digital Signature Algorithm | Ethereum, EVM-compatible chains |
| `bls381` | Boneh-Lynn-Shacham signatures (BLS12-381 curve) | Signature aggregation protocols |
| `bls377` | Boneh-Lynn-Shacham signatures (BLS12-377 curve) | Signature aggregation protocols |
| `bn254` | Barreto-Naehrig curve for BLS signatures | Eigenlayer and zero-knowledge proofs |
The Blueprint keystore system provides a flexible infrastructure for managing cryptographic keys with multiple storage backends.
Keystore System Architecture:
- Keystore is configured through KeystoreConfig and dispatches to storage backends via a Backend trait.
- Local storage backends: InMemoryStorage, FileStorage, SubstrateStorage.
- Remote storage backends: AWS KMS, Google Cloud KMS, Ledger hardware wallets.
- Backends expose raw key bytes through the RawStorage trait, with TypedStorage layered on top for typed key handling.
The Blueprint keystore system supports multiple storage backends for keys (InMemoryStorage, FileStorage, SubstrateStorage), each selected through KeystoreConfig. Every backend implements the raw storage operations:
- store_raw()
- load_secret_raw()
- remove_raw()
- contains_raw()
- list_raw()
Configuration example (in-memory backend):
let config = KeystoreConfig::new().in_memory(true);
let keystore = Keystore::new(config)?;
Configuration example (filesystem backend):
let config = KeystoreConfig::new().fs_root("./my-keystore");
let keystore = Keystore::new(config)?;
Configuration example (Substrate backend):
let local_keystore = sc_keystore::LocalKeystore::in_memory();
let config = KeystoreConfig::new().substrate(Arc::new(local_keystore));
let keystore = Keystore::new(config)?;
- Separate Keystores: Use different keystores for different environments (development, testing, production).
- Backup Private Keys: Always keep secure backups of your private keys or mnemonic phrases.
- Key Rotation: Periodically generate new keys for improved security.
- Environment-Specific Keys: Use different keys for different networks/protocols (Tangle vs. Eigenlayer).
- Secure Storage: For production deployments, use secure key storage options rather than plain file storage.
The key management commands handle various error scenarios:
| Error | Description |
|---|---|
| KeyTypeNotSupported | The specified key type is not supported |
| KeyNotFound | The key was not found in the keystore |
| InvalidKeyFormat | The key format is invalid |
| InvalidSeed | The seed provided is invalid |
| StorageNotSupported | The storage backend is not supported |
| KeystoreOperationNotSupported | The operation is not supported by the keystore |
A typical workflow for using key management commands might look like:
1. Generate a mnemonic phrase:
cargo tangle key generate-mnemonic -w 24
2. Generate keys for different protocols:
cargo tangle key generate -t sr25519 -o ./tangle-keystore
cargo tangle key generate -t ecdsa -o ./eigenlayer-keystore
3. List available keys:
cargo tangle key list -k ./tangle-keystore
4. Deploy a blueprint using the generated keys:
cargo tangle blueprint deploy tangle --keystore-path ./tangle-keystore
5. Run a service with the keystore:
cargo tangle blueprint run -p tangle --keystore-path ./tangle-keystore
- Overview: Overview
- Architecture Overview: Architecture Overview
- Key Concepts: Key Concepts
- Protocol Support: Protocol Support
- Installation: Installation
- Creating Your First Blueprint: Creating Your First Blueprint
- Example Blueprints: Example Blueprints
- Blueprint SDK: Blueprint SDK
- Core Components: Core Components
- Job System: Job System
- Router: Router
- Networking: Networking
- Keystore: Keystore
- Blueprint Runner: Blueprint Runner
- Runner Configuration: Runner Configuration
- Job Execution Flow: Job Execution Flow
- Blueprint Manager: Blueprint Manager
- Event Handling: Event Handling
- Blueprint Sources: Blueprint Sources
- CLI Reference: CLI Reference
- Blueprint Commands: Blueprint Commands
- Key Management Commands: Key Management Commands
- Deployment Options: Deployment Options
- Development: Development
- Build Environment: Build Environment
- Testing Framework: Testing Framework
- CI/CD: CI/CD
- Advanced Topics: Advanced Topics
- Networking Extensions: Networking Extensions
- Macro System: Macro System
- Custom Protocol Integration: Custom Protocol Integration
The Tangle Blueprint project uses Nix flakes for reproducible development environments.
Before setting up the development environment, you'll need:
- Git
- Nix package manager with flakes enabled (recommended)
- Alternatively: Rust toolchain, Foundry, and other dependencies listed in the flake.nix file
To set up using Nix:
- Clone the repository
- Run nix develop in the project root
- All required tools will be available in your shell environment
The development environment is defined in the flake.nix file, which configures all required dependencies including:
- Rust toolchain (defined in rust-toolchain.toml)
- Foundry for Ethereum development
- Required system libraries (OpenSSL, GMP, Protobuf)
- Cargo tools for testing and development
If you prefer not to use Nix, you can manually install the required dependencies:
- Install Rust using rustup
- Install Foundry for Ethereum development
- Install system libraries:
- OpenSSL development packages
- GMP development packages
- Protobuf compiler
- Install additional Cargo tools:
- cargo-nextest
- cargo-expand
- cargo-dist
The Tangle Blueprint project is organized as a Cargo workspace with multiple packages.
To build the entire project:
cargo build
To build a specific package:
cargo build -p <package-name>
To build with optimizations for release:
cargo build --release
The Blueprint project uses a comprehensive testing framework with support for both unit and integration tests.
The project uses cargo-nextest for efficient test execution.
To run tests for the entire project:
cargo nextest run
To run tests for a specific package:
cargo nextest run -p <package-name>
For documentation tests (which aren't yet supported by nextest):
cargo test --doc
The project defines custom test profiles in .config/nextest.toml:
- ci - Used in continuous integration, runs tests in parallel where possible.
- serial - Used for tests that cannot run in parallel (networking tests, etc.).
The project uses GitHub Actions for continuous integration, ensuring code quality and test coverage for all pull requests and the main branch.
The CI workflow is defined in .github/workflows/ci.yml and includes the following jobs:
- Formatting: Checks code formatting using rustfmt
- Linting: Runs clippy to check for code quality issues
- Matrix Testing: Dynamically generates a matrix of workspace packages and runs tests for each package
The CI uses a matrix testing approach where:
- The workflow dynamically generates a list of all packages in the workspace
- Tests are run for each package individually
- Certain packages are identified as requiring serial test execution
- The appropriate nextest profile (ci or serial) is selected based on the package
This approach ensures efficient test execution while respecting the constraints of packages that cannot run tests in parallel.
When contributing to the Blueprint project, follow these best practices:
- Ensure code passes all CI checks before merging:
  - Run cargo +nightly fmt to format code
  - Run cargo clippy to check for common issues
  - Run all tests with cargo nextest run
- Write comprehensive tests for new features:
- Unit tests for individual components
- Integration tests for inter-component interactions
- Documentation tests for public API examples
- Update documentation when making significant changes:
- Update README files as needed
- Add documentation comments to public APIs
- Consider updating this wiki with new information
Development Process: Clone Repository → Setup Dev Environment → Create Feature Branch → Implement Feature → Run Tests Locally → Create Pull Request → CI Checks Run → Code Review → Merge to Main
| Issue | Solution |
|---|---|
| Linker errors | Ensure required system libraries are installed (OpenSSL, GMP, Protobuf) |
| Test failures in CI but not locally | Ensure tests work in isolation and don't depend on environment-specific state |
| Formatting errors in CI | Run cargo +nightly fmt locally before pushing |
| Slow builds | Consider enabling faster linkers like mold (Linux) or using the Nix environment |
| Dependencies issues | Update lockfiles with cargo update or delete target directory and rebuild |
You can install the Tangle CLI in two ways:
Install the latest stable version of cargo-tangle using the installation script:
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/tangle-network/gadget/releases/download/cargo-tangle/v0.1.1-beta.7/cargo-tangle-installer.sh | sh
Install the latest git version of cargo-tangle using the following command:
cargo install cargo-tangle --git https://github.com/tangle-network/gadget --force
The cargo-tangle command-line interface is used to create, deploy, and manage blueprints on the Tangle Network. It supports various commands for blueprint management and key handling.
The CLI is organized around two command enums:

Blueprint Commands (BlueprintCommands enum):
- Create (c) → create_new_blueprint()
- Deploy (d) → deploy_tangle() / deploy_eigenlayer()
- Run (r) → run_blueprint()
- ListRequests (ls) → list_requests()
- ListBlueprints (lb)
- Register (reg)
- AcceptRequest (accept)
- RejectRequest (reject)
- RequestService (req)
- SubmitJob (submit)
- DeployMBSM (mbsm)

Key Commands (KeyCommands enum):
- Generate (g) → generate_key()
- Import (i) → import_key()
- Export (e) → export_key()
- List (l) → list_keys()
- GenerateMnemonic (m) → generate_mnemonic()
Creates a new blueprint project from a template.
Usage:
cargo tangle blueprint create --name <NAME> [OPTIONS]
Aliases: cargo tangle bp c
Options:
- --name, -n <NAME>: The name of the blueprint (required)
- --source: Specify the source template (optional)
- --blueprint-type: Specify the type of blueprint to create (optional)
Example:
cargo tangle blueprint create --name my_blueprint
Deploys a blueprint to either the Tangle Network or Eigenlayer.
Usage:
cargo tangle blueprint deploy <TARGET> [OPTIONS]
Aliases: cargo tangle bp d
Targets:
-
Tangle
cargo tangle blueprint deploy tangle [OPTIONS]
Options:
- --http-rpc-url <URL>: HTTP RPC URL (default: https://rpc.tangle.tools)
- --ws-rpc-url <URL>: WebSocket RPC URL (default: wss://rpc.tangle.tools)
- --package, -p <PACKAGE>: The package to deploy
- --devnet: Start a local devnet using a Tangle test node
- --keystore-path, -k <PATH>: The keystore path (defaults to ./keystore)
-
Eigenlayer
cargo tangle blueprint deploy eigenlayer [OPTIONS]

Submit options (for cargo tangle blueprint submit):
- --ws-rpc-url <URL>: The RPC endpoint to connect to (default: ws://127.0.0.1:9944)
- --service-id <ID>: The service ID to submit the job to
- --blueprint-id <ID>: The blueprint ID to submit the job to
- --keystore-uri <URI>: The keystore URI to use
- --job <JOB>: The job ID to submit
- --params-file <FILE>: Optional path to a JSON file containing job parameters
- --watcher: Whether to wait for the job to complete
Example:
cargo tangle blueprint submit --blueprint-id 42 --job 1 --keystore-uri ./keystore
Deploys a Master Blueprint Service Manager (MBSM) contract to the Tangle Network.
Usage:
cargo tangle blueprint deploy-mbsm [OPTIONS]
Options:
- --rpc-url <URL>: HTTP RPC URL
- --contracts-path <PATH>: Path to the contracts
- --ordered-deployment: Deploy contracts in an interactive ordered manner
- --network, -w <NETWORK>: Network to deploy to (local, testnet, mainnet; default: local)
- --devnet: Start a local devnet using Anvil (only valid with network=local)
- --keystore-path, -k <PATH>: The keystore path (defaults to ./keystore)
Example (deploying a blueprint to a local devnet):
cargo tangle blueprint deploy tangle --devnet --package my_blueprint
Runs a blueprint gadget, connecting to a specified protocol and network. Usage:
cargo tangle blueprint run --protocol <PROTOCOL> [OPTIONS]
Aliases: cargo tangle bp r
Options:
- --protocol, -p <PROTOCOL>: The protocol to run (eigenlayer or tangle)
- --rpc-url, -u <URL>: The HTTP RPC endpoint URL (default: http://127.0.0.1:9944)
- --keystore-path, -k <PATH>: The keystore path (defaults to ./keystore)
- --binary-path, -b <PATH>: The path to the binary
- --network, -w <NETWORK>: The network to connect to (local, testnet, mainnet; default: local)
- --data-dir, -d <PATH>: The data directory path (defaults to ./data)
- --bootnodes, -n <BOOTNODES>: Optional bootnodes to connect to
- --settings-file, -f <FILE>: Path to the protocol settings env file (default: ./settings.env)
- --podman-host <URL>: The Podman host to use for containerized blueprints
Example:
cargo tangle blueprint run --protocol tangle --rpc-url http://127.0.0.1:9944
A typical end-to-end Tangle blueprint workflow:
1. cargo tangle blueprint create --name my_blueprint
2. cd my_blueprint
3. cargo build
4. cargo tangle blueprint deploy tangle --devnet
5. cargo tangle blueprint list-blueprints
6. cargo tangle blueprint register --blueprint-id 0
7. cargo tangle blueprint request-service --blueprint-id 0 --value 1000000
8. cargo tangle blueprint list-requests
9. cargo tangle blueprint accept-request --request-id 0
10. cargo tangle blueprint run --protocol tangle
11. cargo tangle blueprint submit --blueprint-id 0 --service-id 0 --job 1

Lists service requests for a Tangle blueprint.
Usage:
cargo tangle blueprint list-requests [OPTIONS]
Aliases: cargo tangle bp ls
Options:
--ws-rpc-url <URL>: WebSocket RPC URL to use (default: ws://127.0.0.1:9944)
Example:
cargo tangle blueprint list-requests --ws-rpc-url wss://rpc.tangle.tools
Lists blueprints on the target Tangle network. Usage:
cargo tangle blueprint list-blueprints [OPTIONS]
Aliases: cargo tangle bp lb
Options:
--ws-rpc-url <URL>: WebSocket RPC URL to use (default: ws://127.0.0.1:9944)
Rejects a service request. Usage:
cargo tangle blueprint reject-request [OPTIONS]
Aliases: cargo tangle bp reject
Options:
- --ws-rpc-url <URL>: WebSocket RPC URL to use (default: ws://127.0.0.1:9944)
- --keystore-uri <URI>: The keystore URI to use (default: ./keystore)
- --request-id <ID>: The request ID to respond to
Example:
cargo tangle blueprint reject-request --request-id 123
Requests a Tangle service. Usage:
cargo tangle blueprint request-service [OPTIONS]
Aliases: cargo tangle bp req
Options:
- --ws-rpc-url <URL>: WebSocket RPC URL to use (default: ws://127.0.0.1:9944)
- --blueprint-id <ID>: The blueprint ID to request
- --min-exposure-percent <PERCENT>: The minimum exposure percentage (default: 50)
- --max-exposure-percent <PERCENT>: The maximum exposure percentage (default: 80)
- --target-operators <OPERATORS>: The target operators to request
- --value <VALUE>: The value to request
- --keystore-uri <URI>: The keystore URI to use (default: ./keystore)
Example:
cargo tangle blueprint request-service --blueprint-id 42 --value 1000000
Submits a job to a service. Usage:
cargo tangle blueprint submit [OPTIONS]
When running a blueprint with the --settings-file option, the file should be in the .env format and contain protocol-specific settings:
BLUEPRINT_ID=42
SERVICE_ID=123
ALLOCATION_MANAGER_ADDRESS=0x1234...
REGISTRY_COORDINATOR_ADDRESS=0x5678...
OPERATOR_STATE_RETRIEVER_ADDRESS=0x9abc...
DELEGATION_MANAGER_ADDRESS=0xdef0...
SERVICE_MANAGER_ADDRESS=0x1122...
STAKE_REGISTRY_ADDRESS=0x3344...
STRATEGY_MANAGER_ADDRESS=0x5566...
STRATEGY_ADDRESS=0x7788...
AVS_DIRECTORY_ADDRESS=0x99aa...
REWARDS_COORDINATOR_ADDRESS=0xbbcc...
PERMISSION_CONTROLLER_ADDRESS=0xddee...
Registers for a Tangle blueprint.
Usage:
cargo tangle blueprint register [OPTIONS]
Aliases: cargo tangle bp reg
Options:
- --ws-rpc-url <URL>: WebSocket RPC URL to use (default: ws://127.0.0.1:9944)
- --blueprint-id <ID>: The blueprint ID to register
- --keystore-uri <URI>: The keystore URI to use (default: ./keystore)
Example:
cargo tangle blueprint register --blueprint-id 42 --ws-rpc-url wss://rpc.tangle.tools
Deploys the Master Blueprint Service Manager (MBSM) contract.
Usage:
cargo tangle blueprint deploy-mbsm [OPTIONS]
Aliases: cargo tangle bp mbsm
Options:
- --http-rpc-url <URL>: The HTTP RPC URL to use (default: http://127.0.0.1:9944)
- --force, -f: Force deployment even if the contract is already deployed
Example:
cargo tangle blueprint deploy-mbsm --http-rpc-url https://rpc.tangle.tools
Key management commands handle cryptographic key operations such as generation, import, export, and listing.
Generates a new cryptographic key.
Usage:
cargo tangle key generate [OPTIONS]
Aliases: cargo tangle k g
Options:
- --key-type, -t <TYPE>: The type of key to generate (sr25519, ed25519, ecdsa, bls381, bls377, bn254)
- --output, -o <PATH>: The path to save the key to
- --seed <SEED>: The seed to use for key generation (hex format without 0x prefix)
- --show-secret, -v: Show the secret key in output
Example:
cargo tangle key generate --key-type sr25519 --output ./my-keystore --show-secret
Imports a key into the keystore.
Usage:
cargo tangle key import [OPTIONS]
Aliases: cargo tangle k i
Options:
- --key-type, -t <TYPE>: Type of key to import (sr25519, ed25519, ecdsa, bls381, bls377, bn254)
- --secret, -x <SECRET>: Secret key to import (hex format without 0x prefix)
- --keystore-path, -k <PATH>: Path to the keystore
- --protocol, -p <PROTOCOL>: Protocol for generating keys (Eigenlayer or Tangle; default: tangle)
Example:
cargo tangle key import --key-type sr25519 --secret abcdef1234567890 --keystore-path ./keystore

A typical keystore setup might look like:
1. Generate a new keystore with multiple key types:
mkdir -p ./keystore
cargo tangle key generate --key-type sr25519 --output ./keystore
cargo tangle key generate --key-type ecdsa --output ./keystore
2. List all keys in the keystore:
cargo tangle key list --keystore-path ./keystore
3. Generate a mnemonic for backup:
cargo tangle key generate-mnemonic --word-count 24
4. Export a key:
cargo tangle key export --key-type sr25519 --public <PUBLIC_KEY> --keystore-path ./keystore

Export:
Exports a key from the keystore.
Usage:
cargo tangle key export [OPTIONS]
Aliases: cargo tangle k e
Options:
- --key-type, -t <TYPE>: The type of key to export (sr25519, ed25519, ecdsa, bls381, bls377, bn254)
- --public, -p <PUBLIC>: The public key to export (hex format without 0x prefix)
- --keystore-path, -k <PATH>: The path to the keystore
Example:
cargo tangle key export --key-type sr25519 --public 0123456789abcdef --keystore-path ./keystore
Aliases: cargo tangle k l
Options:
--keystore-path, -k <PATH>: The path to the keystore
Example:
cargo tangle key list --keystore-path ./keystore
Generates a new mnemonic phrase.
Usage:
cargo tangle key generate-mnemonic [OPTIONS]
Aliases: cargo tangle k m
Options:
--word-count, -w <COUNT>: Number of words in the mnemonic (12, 15, 18, 21, or 24)
Example:
cargo tangle key generate-mnemonic --word-count 24
Accepts a Tangle service request.
Usage:
cargo tangle blueprint accept-request [OPTIONS]
Aliases: cargo tangle bp accept
Options:
- --ws-rpc-url <URL>: WebSocket RPC URL to use (default: ws://127.0.0.1:9944)
- --min-exposure-percent <PERCENT>: The minimum exposure percentage (default: 50)
- --max-exposure-percent <PERCENT>: The maximum exposure percentage (default: 80)
- --keystore-uri <URI>: The keystore URI to use (default: ./keystore)
- --restaking-percent <PERCENT>: The restaking percentage (default: 50)
- --request-id <ID>: The request ID to respond to
Example:
cargo tangle blueprint accept-request --request-id 123 --min-exposure-percent 60
Several CLI commands support configuration through environment variables:
| Environment Variable | Used By | Description |
|---|---|---|
| WS_RPC_URL | List commands, Register, Request/Accept/Reject | WebSocket RPC URL (default: ws://127.0.0.1:9944) |
| HTTP_RPC_URL | Deploy MBSM | HTTP RPC URL (default: http://127.0.0.1:9944) |
| KEYSTORE_URI | Register, Accept/Reject, Request | Keystore URI (default: ./keystore) |
| SIGNER | Deploy | SURI of the signer account (optional) |
| EVM_SIGNER | Deploy | SURI of the EVM signer account (optional) |
| PODMAN_HOST | Run | Podman host to use for containerized blueprints |
- Core Testing Components: Understand the essential parts of the testing framework.
- TestEnv Trait: Defines the environment for running tests.
- TestRunner: The main component responsible for executing tests.
- Protocol-Specific Test Environments: Tailored environments for different protocols.
- TangleTestEnv: A specific test environment for Tangle.
- EigenlayerBLSTestEnv: Test environment for Eigenlayer BLST.
- Tangle Test Harness: Framework for managing and running tests.
- TangleTestHarness API: API documentation for interacting with the test harness.
- Single-Node Test: set up a single test node, execute the job, and verify the result.
- Multi-Node Test: set up multiple nodes and execute the job across all of them.
- Integrate testing framework with Continuous Integration systems to automate testing.
- The testing framework provides a comprehensive suite for testing applications, ensuring reliability and performance.
This document details the comprehensive testing framework for the Tangle Blueprint project. The framework provides utilities for testing blueprints across various blockchain environments with a focus on Tangle Network and Eigenlayer. It enables developers to validate blueprint implementations through controlled test environments that simulate real-world blockchain interactions.
The Blueprint testing framework follows a modular design pattern, allowing developers to test their blueprints across different blockchain environments with a consistent API.
The testing framework is built around several core components that provide a consistent way to test blueprints across different blockchain environments:
- TestEnv trait
- TestRunner
- TangleTestEnv
- EigenlayerBLSTestEnv
- TangleTestHarness
- MultiNodeTestEnv
- NodeHandle
Relevant source files:
- ci.yml
- run.rs (CLI tests)
- transactions.rs
- core runner.rs
- eigenlayer runner.rs
- tangle harness.rs
- tangle lib.rs
- multi_node.rs
- tangle runner.rs
The TestRunner is responsible for executing jobs in a controlled environment. It takes a router with configured jobs and executes them with the provided context.
pub struct TestRunner<Ctx> {
router: Option<Router<Ctx>>,
job_index: usize,
pub builder: Option<BlueprintRunnerBuilder<Pending<()>>>,
_phantom: core::marker::PhantomData<Ctx>,
}
The TestRunner provides methods to:
- Add jobs to be executed
- Add background services
- Run the jobs with a given context
Each node in the MultiNodeTestEnv is represented by a NodeHandle that provides control over individual nodes:
pub struct NodeHandle<Ctx> {
pub node_id: usize,
pub addr: Multiaddr,
pub port: u16,
pub client: TangleClient,
pub signer: TanglePairSigner<sp_core::sr25519::Pair>,
state: Arc<RwLock<NodeState>>,
command_tx: mpsc::Sender<NodeCommand>,
pub test_env: Arc<RwLock<TangleTestEnv<Ctx>>>,
}
The testing framework provides utilities for executing common transactions on the Tangle network:
// Transaction Functions
deploy_new_mbsm_revision()
create_blueprint()
join_operators()
register_for_blueprint()
submit_job()
request_service()
approve_service()
wait_for_completion_of_tangle_job()
Sources: 529-690
A typical test workflow using the TangleTestHarness involves:
- Setting up the test environment
- Deploying a blueprint
- Setting up operators and services
- Submitting a job
- Waiting for job execution
- Verifying the results
// Initialize the test harness
let harness = TangleTestHarness::<MyContext>::setup(temp_dir).await?;
// Deploy the blueprint
let blueprint_id = harness.deploy_blueprint().await?;
// Setup services with operators
let (mut test_env, service_id, _) = harness.setup_services::<1>(false).await?;
// Initialize the test environment
test_env.initialize().await?;
// Add a job to be executed
test_env.add_job(my_job).await;
// Start the test environment with context
test_env.start(my_context).await?;
// Submit a job to the service
let job = harness.submit_job(service_id, job_id, inputs).await?;
// Wait for job execution
let results = harness.wait_for_job_execution(service_id, job).await?;
// Verify the job results
The testing framework is designed for CI environments. The CI configuration in .github/workflows/ci.yml defines the test execution process:
- Checks code formatting
- Runs linting checks
- Generates a matrix of crates to test
- Runs tests for each crate
Special handling for some crates in the CI configuration:
SERIAL_CRATES=("blueprint-tangle-testing-utils" "blueprint-client-evm" "blueprint-tangle-extra" "blueprint-networking" "cargo-tangle")
The transaction functions listed earlier facilitate:
- Deploying the Master Blueprint Service Manager (MBSM)
- Creating new blueprints
- Joining the operator set
- Registering for blueprints
- Submitting jobs to services
- Requesting and approving services
- Waiting for job completion
The TestEnv trait defines the common interface that all test environments must implement, enabling consistent testing patterns regardless of the underlying blockchain protocol.
«trait» TestEnv
- type Config
- type Context
- new(config, env) -> Result<Self, Error>
- add_job(job)
- add_background_service(service)
- get_gadget_config() -> BlueprintEnvironment
- run_runner(context) -> impl Future<Output = Result<(), Error>>

TangleTestEnv (implements TestEnv)
- runner: Option<TestRunner<Ctx>>
- config: TangleConfig
- env: BlueprintEnvironment
- runner_handle: Mutex<Option<JoinHandle>>
- update_networking_config(bootnodes, port)
- set_tangle_producer_consumer()

EigenlayerBLSTestEnv (implements TestEnv)
- runner: Option<TestRunner<Ctx>>
- config: EigenlayerBLSConfig
- env: BlueprintEnvironment
- runner_handle: Mutex<Option<JoinHandle>>

TangleTestEnv is the test environment for Tangle Network blueprints. It implements the TestEnv trait and provides Tangle-specific functionality.
pub struct TangleTestEnv<Ctx> {
pub runner: Option<TestRunner<Ctx>>,
pub config: TangleConfig,
pub env: BlueprintEnvironment,
pub runner_handle: Mutex<Option<JoinHandle<Result<(), Error>>>>,
}
The TangleTestEnv includes methods to:
- Update networking configuration
- Set up Tangle producer and consumer
- Run the test runner with a given context
EigenlayerBLSTestEnv is the test environment for Eigenlayer BLS blueprints. It implements the TestEnv trait and provides Eigenlayer-specific functionality.
pub struct EigenlayerBLSTestEnv<Ctx> {
pub runner: Option<TestRunner<Ctx>>,
pub config: EigenlayerBLSConfig,
pub env: BlueprintEnvironment,
pub runner_handle: Mutex<Option<JoinHandle<Result<(), Error>>>>,
}
The TangleTestHarness provides a comprehensive set of utilities for testing Tangle Network blueprints. It handles the setup of test nodes, deployment of blueprints, registration of operators, creation of services, and execution of jobs.
The TangleTestHarness drives a test flow of setup() → deploy_blueprint() → setup_services() → submit_job() → wait_for_job_execution() → verify_job(), backed by a local Tangle node and the Master Blueprint Service Manager.
The TangleTestHarness provides a rich API for testing Tangle Network blueprints:
| Method | Description |
|---|---|
| setup | Sets up a test environment with a local Tangle node |
| deploy_blueprint | Deploys a blueprint to the Tangle network |
| setup_services | Sets up operators and services for testing |
| submit_job | Submits a job to be executed by the service |
| wait_for_job_execution | Waits for job execution to complete |
| verify_job | Verifies that job results match expected outputs |
Sources: TangleTestHarness Source Code
The MultiNodeTestEnv allows testing with multiple operators, enabling complex scenarios such as consensus protocols and distributed systems.
The MultiNodeTestEnv exposes initialize(), add_job(), start(), start_with_contexts(), add_node(), remove_node(), and shutdown(), communicating with individual nodes through EnvironmentCommand and NodeCommand channels. Each NodeHandle in turn supports add_job(), add_background_service(), start_runner(), and shutdown().
The MultiNodeTestEnv provides methods for managing multiple test nodes:
| Method | Description |
|---|---|
| new | Creates a new multi-node test environment |
| initialize | Initializes the environment with the specified number of nodes |
| add_job | Adds a job to all nodes |
| start | Starts all nodes with the same context |
| start_with_contexts | Starts nodes with different contexts |
| add_node | Adds a new node to the environment |
| remove_node | Removes a node from the environment |
| shutdown | Shuts down the environment |
#[tokio::test]
async fn test_multi_node() -> Result<(), Error> {
// Set up the test harness
let temp_dir = TempDir::new()?;
let harness = TangleTestHarness::<MyContext>::setup(temp_dir).await?;
// Deploy the blueprint
let blueprint_id = harness.deploy_blueprint().await?;
// Set up services with multiple nodes
let (mut test_env, service_id, _) = harness.setup_services::<3>(false).await?;
test_env.initialize().await?;
// Add a job to test
test_env.add_job(my_test_job).await;
// Start the test environment
test_env.start(my_context).await?;
// Submit a job
let job = harness.submit_job(service_id, 0, inputs).await?;
// Wait for execution and verify results
let results = harness.wait_for_job_execution(service_id, job).await?;
harness.verify_job(&results, expected_outputs);
Ok(())
}
// Add a job to all nodes
test_env.add_job(my_test_job).await;
// Create node-specific contexts
let node_handles = test_env.node_handles().await;
let contexts = node_handles.iter().enumerate().map(|(idx, _)| {
MyContext { node_index: idx }
}).collect::<Vec<_>>();
// Start the test environment with node-specific contexts
test_env.start_with_contexts(contexts).await?;
// Submit a job
let job = harness.submit_job(service_id, 0, inputs).await?;
// Wait for execution and verify results
let results = harness.wait_for_job_execution(service_id, job).await?;
harness.verify_job(&results, expected_outputs);
Ok(())

A typical testing workflow:
1. Set up the test environment using TangleTestHarness or the appropriate test environment for your protocol.
2. Deploy your blueprint using deploy_blueprint().
3. Set up operators and services using setup_services().
4. Define and add jobs to be tested.
5. Run your test environment.
6. Submit jobs and verify results.
#[tokio::test]
async fn test_my_blueprint() -> Result<(), Error> {
// Set up the test harness
let temp_dir = TempDir::new()?;
let harness = TangleTestHarness::<MyContext>::setup(temp_dir).await?;
// Deploy the blueprint
let blueprint_id = harness.deploy_blueprint().await?;
}
The Blueprint testing framework provides a comprehensive set of tools for testing blueprints across different blockchain environments. It simplifies the complexities of setting up test nodes, deploying blueprints, and interacting with blockchain networks, allowing developers to focus on writing tests for their blueprint functionality. Key components include:
- TangleTestHarness
- MultiNodeTestEnv
- Protocol-specific test environments
These tools enable developers to create robust test suites that validate their blueprint implementations in various scenarios.
- Overview
- Architecture Overview
- Key Concepts
- Protocol Support
- Installation
- Creating Your First Blueprint
- Example Blueprints
- Core Components
- Job System
- Router
- Networking
- Keystore
- Runner Configuration
- Job Execution Flow
- Event Handling
- Blueprint Sources
- Blueprint Commands
- Key Management Commands
- Deployment Options
- Build Environment
- Testing Framework
- CI/CD
- Networking Extensions
- Macro System
- Custom Protocol Integration
Detailed installation instructions, including code snippets and practical examples, are covered in the Installation section linked above.
When creating a new blueprint with `cargo tangle blueprint create`, the system supports different blueprint types and template sources:

- Blueprint types: Tangle Blueprint, Eigenlayer BLS, Eigenlayer ECDSA
- Template sources: default templates, GitHub repositories, custom templates
The cargo-tangle CLI provides various deployment targets for blueprints, managed through the `blueprint deploy` subcommand. The two primary deployment targets are:

- Tangle Network deployment:
  `cargo tangle blueprint deploy tangle --http-rpc-url <URL> --ws-rpc-url <WS_URL> [OPTIONS]`
- Eigenlayer deployment:
  `cargo tangle blueprint deploy eigenlayer --rpc-url <URL> [OPTIONS]`
Eigenlayer deployment options:

| Option | Description | Default |
|---|---|---|
| `--rpc-url` | HTTP RPC endpoint URL | Required (no default) |
| `--contracts-path` | Path to contracts directory | None (auto-detected) |
| `--ordered-deployment` | Deploy contracts in an interactive, ordered manner | `false` |
| `--network` | Target network (local/testnet/mainnet) | `local` |
| `--devnet` | Start a local Anvil instance | `false` |
| `--keystore-path` | Path to the keystore containing deployment keys | `./keystore` |
Tangle deployment options:

| Option | Description | Default |
|---|---|---|
| `--http-rpc-url` | HTTP RPC endpoint URL | https://rpc.tangle.tools |
| `--ws-rpc-url` | WebSocket RPC endpoint URL | wss://rpc.tangle.tools |
| `--package` | Package to deploy (if workspace has multiple) | None (auto-detected) |
| `--devnet` | Start a local Tangle devnet | `false` |
| `--keystore-path` | Path to the keystore containing deployment keys | `./keystore` |
To start a local Tangle testnet and deploy your blueprint:
```shell
cargo tangle blueprint deploy tangle --devnet
```

This command creates a keystore with test accounts and deploys your blueprint to the local network. Additional options:

- `--data-dir`: Directory for data storage
- `--bootnodes`: Optional bootnodes to connect to
- `--settings-file`: Path to the protocol settings file
Diagram: within the cargo-tangle CLI, `blueprint deploy tangle` (protocol=tangle) invokes the `deploy_tangle()` function, `blueprint deploy eigenlayer` (protocol=eigenlayer) invokes `deploy_eigenlayer()`, and `blueprint run` invokes `run_blueprint()` or `run_eigenlayer_avs()`; the key management commands support all of these.
The typical deployment workflow for both Tangle and Eigenlayer:

1. Start the deployment
2. Build the project
3. Set up keys (generate new keys or import existing ones; keys are stored in the keystore)
4. Choose the deployment target
5. Deploy to Tangle or to Eigenlayer
Proper key management is essential for blueprint deployment. The cargo-tangle CLI provides several commands for managing keys:
The Blueprint framework supports multiple key types for different protocols:
| Key Type | Description | Use Case |
|---|---|---|
| Sr25519 | Substrate key type | Tangle Network operations |
| Ed25519 | Edwards-curve Digital Signature Algorithm | General cryptographic operations |
| Ecdsa | Elliptic Curve Digital Signature Algorithm | Tangle and Ethereum operations |
| Bls381 | BLS signatures on the BLS12-381 curve | Aggregate signatures |
| Bls377 | BLS signatures on the BLS12-377 curve | Aggregate signatures |
| Bn254 | BLS signatures on the BN254 curve | Eigenlayer operations |
Key management commands include:
```shell
cargo tangle key generate --key-type <TYPE> [OPTIONS]
cargo tangle key import --key-type <TYPE> --secret <SECRET_KEY> --keystore-path <PATH>
cargo tangle key export --key-type <TYPE> --public <PUBLIC_KEY> --keystore-path <PATH>
cargo tangle key list --keystore-path <PATH>
```
The Blueprint keystore system manages cryptographic keys securely across different backends:
Diagram: the cargo-tangle CLI key commands (generate/import/export/list) operate on the Blueprint keystore, which supports the key types above (Sr25519, Ed25519, Ecdsa, and the BLS variants Bn254/Bls381/Bls377) and multiple storage backends, including file system storage and in-memory storage.
Environment variables or configuration files can be used to define protocol-specific settings for deployment:
For Tangle deployments, create a settings.env file with:
BLUEPRINT_ID=<your_blueprint_id>
SERVICE_ID=<optional_service_id>
For Eigenlayer deployments, create a settings.env file with contract addresses:
ALLOCATION_MANAGER_ADDRESS=0x...
REGISTRY_COORDINATOR_ADDRESS=0x...
OPERATOR_STATE_RETRIEVER_ADDRESS=0x...
DELEGATION_MANAGER_ADDRESS=0x...
SERVICE_MANAGER_ADDRESS=0x...
STAKE_REGISTRY_ADDRESS=0x...
STRATEGY_MANAGER_ADDRESS=0x...
STRATEGY_ADDRESS=0x...
AVS_DIRECTORY_ADDRESS=0x...
REWARDS_COORDINATOR_ADDRESS=0x...
PERMISSION_CONTROLLER_ADDRESS=0x...
After deployment, you can run your blueprint on the target network:
cargo tangle blueprint run --protocol <PROTOCOL> --rpc-url <URL> [OPTIONS]
Where:
- `<PROTOCOL>` is either `tangle` or `eigenlayer`
- `<URL>` is the RPC endpoint URL
- Key Management:
  - Generate and securely store keys before deployment.
  - Use different keys for development and production environments.
  - Regularly back up your keystore.
- Local Testing:
  - Always test deployments with the `--devnet` flag before deploying to production networks.
  - Use the local network option for Eigenlayer testing.
- Network Selection:
  - Use testnet deployments before moving to mainnet.
  - Configure RPC endpoints appropriately for each network.
- Configuration Management:
  - Store network-specific configurations in separate environment files.
  - Include protocol-specific settings in version control with example values.
- Continuous Deployment:
  - Implement CI/CD pipelines for automated testing and deployment.
  - Include deployment verification steps in your workflow.
Common deployment issues and their solutions:
| Issue | Possible Cause | Solution |
|---|---|---|
| Key not found | Missing or incorrect keystore path | Check keystore path and generate/import required keys |
| Connection error | Incorrect RPC URL | Verify URL and network connectivity |
| Contract not found (Eigenlayer) | Missing or incorrect contract addresses | Set correct addresses in settings.env file |
| Build failure | Missing dependencies | Run cargo build manually to see detailed errors |
| Unknown service ID (Tangle) | Service not properly registered | Use cargo tangle blueprint register to register the service |
If you encounter issues during deployment, use the verbose output option or check the logs for more detailed error information.
The blueprint commands are part of the cargo-tangle CLI, allowing management of blueprints across various protocol environments, including Tangle Network and Eigenlayer.
To use the Blueprint commands, follow this structure:
cargo tangle blueprint <COMMAND> [OPTIONS]
or the short alias:
cargo tangle bp <COMMAND> [OPTIONS]
For key management commands, refer to Key Management Commands.
- `--protocol, -p <PROTOCOL>`: Protocol to run (`eigenlayer` or `tangle`)
- `--rpc-url, -u <URL>`: HTTP RPC endpoint URL (default: http://127.0.0.1:9944)
- `--keystore-path, -k <PATH>`: Keystore path (defaults to ./keystore)
- `--binary-path, -b <PATH>`: Path to the AVS binary (built if not provided)
- `--network, -w <NETWORK>`: Network to connect to (local, testnet, mainnet; default: local)
- `--data-dir, -d <PATH>`: Data directory path (defaults to ./data)
- `--bootnodes, -n <BOOTNODES>`: Optional bootnodes to connect to
- `--settings-file, -f <FILE>`: Path to protocol settings env file (default: ./settings.env)
- `--podman-host <URL>`: Podman host for containerized blueprints
The `cargo tangle` CLI exposes the following `blueprint` (alias `bp`) subcommands:

- `create (c)`
- `deploy (d)`
- `run (r)`
- `list-requests (ls)`
- `list-blueprints (lb)`
- `register (reg)`
- `accept-request (accept)`
- `reject-request (reject)`
- `request-service (req)`
- `submit-job (submit)`
- `deploy-mbsm (mbsm)`
These commands help you create, deploy, and run blueprints on different networks.
Creates a new blueprint project with the specified name and type.
cargo tangle blueprint create --name <NAME> [--source <SOURCE>] [--blueprint-type <TYPE>]
Options:
- `--name, -n <NAME>`: Name of the blueprint (required)
- `--source <SOURCE>`: Optional template source (defaults to the official template)
- `--blueprint-type <TYPE>`: Type of blueprint to create (Tangle, EigenlayerBLS, EigenlayerECDSA)
Runs a blueprint on the specified protocol.
cargo tangle blueprint run [OPTIONS]
- Run Blueprint (as operator):
cargo tangle blueprint run --protocol tangle
- Submit Job (as user):
cargo tangle blueprint submit --blueprint-id 0 --service-id 0 --job 1
Accepts a service request.
cargo tangle blueprint accept-request [OPTIONS]
Options:
- `--ws-rpc-url <URL>`: WebSocket RPC URL (default: ws://127.0.0.1:9944)
- `--blueprint-id <ID>`: Blueprint ID to request
- `--min-exposure-percent <PERCENT>`: Minimum exposure percentage (default: 50)
- `--max-exposure-percent <PERCENT>`: Maximum exposure percentage (default: 80)
- `--target-operators <OPERATORS>`: Target operators to request
- `--value <VALUE>`: Value to request
- `--keystore-uri <URI>`: Keystore URI to use (default: ./keystore)
Lists all blueprints on the target Tangle network.
cargo tangle blueprint list-blueprints [--ws-rpc-url <URL>]
Options:
--ws-rpc-url <URL>: WebSocket RPC URL (default: ws://127.0.0.1:9944)
Lists all service requests for a Tangle blueprint.
cargo tangle blueprint list-requests [--ws-rpc-url <URL>]
Options:
--ws-rpc-url <URL>: WebSocket RPC URL (default: ws://127.0.0.1:9944)
Deploys a Master Blueprint Service Manager (MBSM) contract to the Tangle Network.
cargo tangle blueprint deploy-mbsm [OPTIONS]
Options:
- `--http-rpc-url <URL>`: HTTP RPC URL (default: http://127.0.0.1:9944)
- `--force, -f`: Force deployment even if the contract is already deployed
Deploys a blueprint to the target network (Tangle or Eigenlayer).
cargo tangle blueprint deploy <TARGET> [OPTIONS]
cargo tangle blueprint deploy tangle [OPTIONS]
Options:
- `--http-rpc-url <URL>`: HTTP RPC URL (default: https://rpc.tangle.tools)
- `--ws-rpc-url <URL>`: WebSocket RPC URL (default: wss://rpc.tangle.tools)
- `--package, -p <PACKAGE>`: Package to deploy (if the workspace has multiple packages)
- `--devnet`: Start a local devnet using a Tangle test node
- `--keystore-path, -k <PATH>`: Path to the keystore (default: ./keystore)
cargo tangle blueprint deploy eigenlayer [OPTIONS]
Options:
- `--rpc-url <URL>`: RPC URL for Eigenlayer deployment
- `--contracts-path <PATH>`: Path to the contracts
- `--ordered-deployment`: Deploy contracts in an interactive, ordered manner
- `--network, -w <NETWORK>`: Network to deploy to (local, testnet, mainnet; default: local)
- `--devnet`: Start a local devnet using Anvil (only valid with network=local)
- `--keystore-path, -k <PATH>`: Path to the keystore (default: ./keystore)
These commands are used to manage blueprint services on the Tangle Network.
- Deploy Blueprint
- Register as Operator
- Request Service
- Accept Request
- Reject Request
- Submit Job
Registers an account as an operator for a blueprint.
cargo tangle blueprint register --ws-rpc-url <URL> --blueprint-id <ID> --keystore-uri <URI>
Options:
- `--ws-rpc-url <URL>`: WebSocket RPC URL (default: ws://127.0.0.1:9944)
- `--blueprint-id <ID>`: Blueprint ID to register for
- `--keystore-uri <URI>`: Keystore URI to use (default: ./keystore)
Requests a blueprint service from operators.
cargo tangle blueprint request-service [OPTIONS]
Rejects a service request.
cargo tangle blueprint reject-request [OPTIONS]
Options:
- `--ws-rpc-url <URL>`: WebSocket RPC URL (default: ws://127.0.0.1:9944)
- `--keystore-uri <URI>`: Keystore URI to use (default: ./keystore)
- `--request-id <ID>`: Request ID to respond to
Submits a job to a blueprint service.
cargo tangle blueprint submit [OPTIONS]
Options:
- `--ws-rpc-url <URL>`: WebSocket RPC URL (default: ws://127.0.0.1:9944)
- `--service-id <ID>`: Service ID to submit the job to
- `--blueprint-id <ID>`: Blueprint ID to submit the job to
- `--keystore-uri <URI>`: Keystore URI to use
- `--job <JOB_ID>`: Job ID to submit
- `--params-file <FILE>`: Optional path to a JSON file containing job parameters
- `--watcher`: Whether to wait for the job to complete
Job Submission Flow: the client sends a job ID and parameters to the Blueprint Router, which matches a route and dispatches the call to the job handler; the handler processes the job and the job result is returned to the client.
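The flow above can be sketched as a toy router: a map from numeric job IDs to handler functions. This is a hypothetical simplification, not the SDK's actual `Router` API; the `square` handler echoes the incredible-squaring example used elsewhere in these docs.

```rust
use std::collections::HashMap;

// Toy router: dispatches a job call (job ID + parameters) to a handler
// and returns the job result, or None when no route matches.
type Handler = fn(&[u64]) -> u64;

struct Router {
    routes: HashMap<u8, Handler>,
}

impl Router {
    fn new() -> Self {
        Self { routes: HashMap::new() }
    }

    // Builder-style route registration, mirroring the flow's "Route Match" step.
    fn route(mut self, job_id: u8, handler: Handler) -> Self {
        self.routes.insert(job_id, handler);
        self
    }

    // "Process Job" + "Return to Client": look up the handler and invoke it.
    fn call(&self, job_id: u8, params: &[u64]) -> Option<u64> {
        self.routes.get(&job_id).map(|h| h(params))
    }
}

// Example handler: squares its first parameter.
fn square(params: &[u64]) -> u64 {
    params[0] * params[0]
}
```

Registering `square` under job ID 0 and calling `router.call(0, &[5])` yields `Some(25)`; an unknown job ID yields `None`.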
These commands provide information about blueprints and service requests.
The following environment variables can be used with blueprint commands:
| Variable | Description | Default |
|---|---|---|
| `WS_RPC_URL` | WebSocket RPC URL for Tangle | ws://127.0.0.1:9944 |
| `HTTP_RPC_URL` | HTTP RPC URL for Tangle | http://127.0.0.1:9944 |
| `KEYSTORE_URI` | Path to keystore | ./keystore |
| `PODMAN_HOST` | Podman host for containerized blueprints | unix:///var/run/docker.sock |
| `NAME` | Name for blueprint creation | - |
| `BLUEPRINT_ID` | Blueprint ID for settings file | - |
| `SERVICE_ID` | Service ID for settings file | - |
Protocol-specific settings files can be provided when running blueprints. The format depends on the protocol:
For Tangle:

```
BLUEPRINT_ID=0
SERVICE_ID=0
```

For Eigenlayer:

```
ALLOCATION_MANAGER_ADDRESS=0x...
REGISTRY_COORDINATOR_ADDRESS=0x...
OPERATOR_STATE_RETRIEVER_ADDRESS=0x...
DELEGATION_MANAGER_ADDRESS=0x...
SERVICE_MANAGER_ADDRESS=0x...
STAKE_REGISTRY_ADDRESS=0x...
STRATEGY_MANAGER_ADDRESS=0x...
STRATEGY_ADDRESS=0x...
AVS_DIRECTORY_ADDRESS=0x...
REWARDS_COORDINATOR_ADDRESS=0x...
PERMISSION_CONTROLLER_ADDRESS=0x...
```
Diagram: the cargo-tangle CLI blueprint commands grouped by purpose — creation and deployment (`create`, `deploy`, `run`), service management (`register`, `request-service`, `accept-request`, `reject-request`), job management (`submit-job`), and information (`list-blueprints`, `list-requests`).
A typical workflow for developing and deploying a blueprint follows these steps:

1. Create the blueprint
2. Build the blueprint
3. Test the blueprint
4. Deploy the blueprint
5. Run the blueprint
6. Register as an operator
7. Request a service
8. Accept the request
9. Submit a job
1. Create Blueprint: `cargo tangle blueprint create --name my_blueprint`
2. Build Blueprint: `cd my_blueprint && cargo build`
3. Deploy Blueprint: `cargo tangle blueprint deploy tangle --ws-rpc-url wss://rpc.tangle.tools`
4. Register as Operator (optional): `cargo tangle blueprint register --blueprint-id 0`
5. Request Service (as user): `cargo tangle blueprint request-service --blueprint-id 0 --value 100`
6. Accept Request (as operator): `cargo tangle blueprint accept-request --request-id 0`
The Tangle Blueprint framework uses GitHub Actions for continuous integration, automatically running checks on pull requests and commits to the main branch.
Diagram: on a pull request or push, GitHub Actions runs parallel jobs — formatting check, code linting, test matrix generation, and package tests; passing CI marks the change ready for merge, while failures require fixes.
The CI workflow is defined in the .github/workflows/ci.yml file and consists of several distinct jobs that run in parallel.
The CI pipeline is triggered by:
- Pull requests to the main branch
- Pushes to the main branch
- Manual workflow dispatch
Diagram: a PR to main, a push to main, or a manual dispatch triggers the `formatting`, `linting`, `generate-matrix`, and `testing` (matrix) jobs, which run through to completion.
The formatting job ensures consistent code style across the codebase using Rust's nightly formatter.
- Runner : Ubuntu latest
- Rust Toolchain : Nightly with rustfmt
- Command :
cargo +nightly fmt -- --check
The CI/CD pipeline integrates with the broader development workflow for the Tangle Blueprint framework.
Diagram: developers run formatting, linting, and tests locally, then push changes; the CI pipeline validates them, failures are fixed and re-pushed, and passing changes proceed through code review before merging to main.
- Fix Tangle Node Testing
- Integrate Doc Tests with nextest
- Enhanced Build Caching
This job dynamically generates a test matrix by analyzing the cargo workspace to identify all packages.
- Runner: Ubuntu latest
- Command: Uses `cargo metadata` and `jq` to extract package names
- Output: JSON array of package names used by the testing job
Diagram: the matrix-generation job checks out the code, runs `cargo metadata`, extracts package names with `jq`, and outputs the matrix consumed by the test job.
The testing job runs unit and integration tests for each package in the workspace, using the matrix generated by the previous job.
- Runner: Ubuntu latest
- Dependencies: Foundry, Rust stable, nextest
- Matrix: Runs separate jobs for each package
- Testing Strategy:
- Determines whether to run tests in parallel or serially based on the package
- Uses nextest for faster test execution
- Also runs doc tests with the standard test runner
- Timeout: 30 minutes per package
Diagram: for each matrix entry, the job checks out the code, installs dependencies, determines the test profile (parallel `ci` profile for most packages, `serial` profile for selected packages), then runs nextest followed by doc tests.
The Tangle Blueprint framework uses Nix Flakes to create a reproducible development environment that closely mirrors the CI environment.
The development environment is defined in flake.nix and includes all necessary tools and dependencies for building, testing, and linting the codebase.
The development shell defined in `flake.nix` provides:

- Build dependencies: pkg-config, clang/libclang, openssl, gmp, protobuf, and the mold linker (Linux)
- Development tools: the Rust toolchain, Foundry, rust-analyzer, cargo-nextest, cargo-expand, cargo-dist, Node.js 22, and Yarn
- cargo-nextest: A modern test runner that provides faster test execution by running tests in parallel.
- cargo-expand: A tool for debugging and understanding macros by showing expanded code.
- cargo-dist: A tool for creating distributable packages.
- Foundry: A development environment for Ethereum smart contracts.
- rust-analyzer: A language server for Rust that provides IDE features.
Developers can run the same checks locally that are performed in CI to validate changes before pushing:

```shell
cargo +nightly fmt -- --check
cargo clippy --tests --examples -- -D warnings
cargo nextest run --package <package-name>
cargo test --package <package-name> --doc
```

The linting job runs with:

- Runner: Ubuntu latest
- Dependencies: Foundry, Rust stable, protobuf, libgmp
- Command: `cargo clippy --tests --examples -- -D warnings`
- Timeout: 120 minutes

Known issues:

- Tangle Node Testing: the test job for the "incredible-squaring-tangle" example is currently disabled due to issues with the Tangle node in CI.
- Documentation Tests: documentation tests currently use the standard test runner, as nextest doesn't yet support doc tests.
- Build Caching: there are opportunities for additional caching to speed up CI runs.
The CI/CD pipeline for the Tangle Blueprint framework provides automated validation for code changes, ensuring consistent quality across the codebase. The combination of formatting checks, linting, and comprehensive testing helps maintain high code quality standards and prevents regressions as the codebase evolves. Developers can use the Nix development environment to run the same checks locally, creating a seamless workflow between local development and CI validation.
Blueprint's networking layer can be extended with specialized protocols to support advanced use cases like signature aggregation and round-based consensus.
Signature aggregation allows multiple signatures to be combined into a single, verifiable signature, which is essential for:
- Threshold signature schemes
- Multi-signature wallets
- Consensus mechanisms requiring signature aggregation
- Reducing on-chain verification costs
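To build intuition for why aggregation reduces verification cost, here is a toy additive scheme in plain Rust. This is deliberately not the BLS/BN254 pairing math the extension actually uses — just a linear stand-in where a single check validates all signers at once. The modulus, generator, and function names are all illustrative.

```rust
// Toy additive "signature" scheme, for intuition only. Real BLS
// aggregation relies on elliptic-curve pairings; here everything is
// linear arithmetic mod a prime, so signatures add up the same way.
const P: u64 = 2_147_483_647; // prime modulus (illustrative)
const G: u64 = 5;             // "generator" (illustrative)

// "Sign": signature = secret * message (mod P).
fn sign(secret: u64, msg: u64) -> u64 {
    (secret % P) * (msg % P) % P
}

// "Public key": secret * G (mod P).
fn pubkey(secret: u64) -> u64 {
    secret % P * G % P
}

// Aggregate many signatures into one by modular addition.
fn aggregate(sigs: &[u64]) -> u64 {
    sigs.iter().fold(0, |acc, s| (acc + s) % P)
}

// One check covers every signer:
// G * sum(secret_i * msg) == msg * sum(secret_i * G)  (mod P).
fn verify_aggregate(agg: u64, msg: u64, pubkeys: &[u64]) -> bool {
    let pk_sum = pubkeys.iter().fold(0u64, |acc, p| (acc + p) % P);
    // u128 intermediates avoid overflow in the products
    (G as u128 * agg as u128) % P as u128 == (msg as u128 * pk_sum as u128) % P as u128
}
```

The point carried over to the real scheme: verifying the aggregate is a single equation over the summed public keys, rather than one verification per signer.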
Blueprint provides extensible storage mechanisms through the blueprint-stores module.
The blueprint-store-local-database provides persistent local storage for blueprints.
Usage Flow: blueprints store data via `blueprint-stores`, which persists it using file-based storage with JSON serialization on the local filesystem.
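That flow can be sketched as a dependency-free, file-backed key-value store. The real crate uses JSON serialization; this toy substitutes a `key=value` line format to stay within the standard library, and every name here is illustrative rather than the crate's actual API.

```rust
use std::collections::BTreeMap;
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

// Toy persistent store: an in-memory map mirrored to a file on every
// write, so data survives restarts (the property the real store provides).
struct LocalStore {
    path: PathBuf,
    data: BTreeMap<String, String>,
}

impl LocalStore {
    // Load existing data from disk if the file exists.
    fn open(path: &Path) -> io::Result<Self> {
        let mut data = BTreeMap::new();
        if path.exists() {
            for line in fs::read_to_string(path)?.lines() {
                if let Some((k, v)) = line.split_once('=') {
                    data.insert(k.to_string(), v.to_string());
                }
            }
        }
        Ok(Self { path: path.to_path_buf(), data })
    }

    // Insert and immediately persist the full map.
    fn set(&mut self, key: &str, value: &str) -> io::Result<()> {
        self.data.insert(key.to_string(), value.to_string());
        let body: String = self.data.iter()
            .map(|(k, v)| format!("{k}={v}\n"))
            .collect();
        fs::write(&self.path, body)
    }

    fn get(&self, key: &str) -> Option<&str> {
        self.data.get(key).map(|s| s.as_str())
    }
}
```

Opening the same path again after dropping the store returns the previously written values.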
- core/Cargo.toml
- crypto/Cargo.toml
- evm-extra/Cargo.toml
- macros/Cargo.toml
- networking/Cargo.toml
- stores/local-database/Cargo.toml
The Aggregated Signature Gossip Extension provides a specialized protocol for efficiently aggregating cryptographic signatures across network participants, particularly useful for threshold signature schemes and BLS signature aggregation.
- Participant management for tracking network peers involved in aggregation
- Efficient signature collection and verification
- Threshold detection for signature completion
- Gossip-based signature propagation over libp2p
- Support for BLS and BN254 signature schemes
To use the extension, enable the `aggregation` feature in the crypto crates:

```toml
blueprint-crypto = { features = ["aggregation"] }
blueprint-crypto-bls = { features = ["aggregation"] }
blueprint-crypto-bn254 = { features = ["aggregation"] }
```

Diagram: the gossip protocol components — ParticipantManager, SignatureAggregator, and GossipHandler — maintain the participants registry and signature collection, exchanging messages through the network service over a signature topic on the P2P network.
The Round-Based Protocol Extension provides infrastructure for implementing round-based consensus protocols and distributed algorithms. It integrates with the round-based crate to simplify development of multi-round protocols.
Key Features:
- State management for multi-round protocols
- Message serialization and routing
- Round advancement and timeout handling
- Integration with Blueprint's networking layer
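A minimal sketch of the round-advancement idea: a round completes once every expected participant has contributed one message. This is a simplified model with illustrative names; the real extension delegates round management to the `round-based` crate.

```rust
use std::collections::HashMap;

// Toy round state: tracks which participants have sent a message for
// the current round, and advances the round when all have.
struct RoundState {
    participants: usize,
    round: u32,
    received: HashMap<usize, Vec<u8>>, // sender index -> payload
}

impl RoundState {
    fn new(participants: usize) -> Self {
        Self { participants, round: 0, received: HashMap::new() }
    }

    // Record a message for the current round. Returns Some(new_round)
    // when the round completed and advanced, None otherwise.
    fn on_message(&mut self, sender: usize, payload: Vec<u8>) -> Option<u32> {
        self.received.insert(sender, payload);
        if self.received.len() == self.participants {
            self.received.clear();
            self.round += 1;
            Some(self.round)
        } else {
            None
        }
    }
}
```

A real protocol would add per-round message types, timeouts for stalled participants, and serialization over the networking layer.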
Components: a Round Manager and Message Handler operate on the protocol state, protocol rounds, and round messages, communicating through the network service; user-defined protocols such as DKG and threshold signing are built on top of this infrastructure.
The Blueprint framework provides a powerful macro system to simplify development and reduce boilerplate code.
The blueprint-macros crate provides procedural macros for enhancing Blueprint development. Key macros include:
- `job`: Defines job handlers with automatic type conversion and error handling
- `job_id`: Creates type-safe job identifiers
- `blueprint`: Sets up the blueprint structure and lifecycle hooks
- `context`: Defines context extensions for dependency injection
Diagram: code annotated with the `job`, `blueprint`, and `context` macros generates the corresponding job implementations, blueprint implementations, and context extensions; `job_id` produces identifiers used by the Router, and the generated code integrates with the Router and enhances the Blueprint Runner.
The blueprint-context-derive crate provides specialized derive macros for context extensions:
- Automatic implementation of context access traits
- Protocol-specific context extensions
- Compile-time verification of context requirements
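As an illustration of what such a derive removes, the hand-written equivalent might look like the following. The trait, struct, and field names are hypothetical, not the SDK's actual context traits.

```rust
// What a hypothetical #[derive(KeystoreContext)] could expand to:
// an assumed context-access trait plus its generated implementation.
trait KeystoreContext {
    fn keystore_uri(&self) -> &str;
}

// A user-defined context struct carrying the dependency.
struct MyContext {
    keystore_uri: String,
}

// The derive macro would generate this impl automatically,
// verifying at compile time that the required field exists.
impl KeystoreContext for MyContext {
    fn keystore_uri(&self) -> &str {
        &self.keystore_uri
    }
}
```

Job handlers can then accept any context bounded by the trait (`fn handler<C: KeystoreContext>(ctx: &C)`), which is the dependency-injection pattern the derive macros automate.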
Diagram: user-defined structs derive the base context and optionally the EVM, Tangle, and network contexts, producing implementations of the corresponding traits for general, EVM-specific, Tangle-specific, and network-specific access.
The Blueprint framework supports a wide range of cryptographic primitives through its modular crypto architecture.
The blueprint-crypto package serves as a metapackage that integrates various cryptographic implementations:
| Scheme | Crate | Features | Use Cases |
|---|---|---|---|
| K256 | `blueprint-crypto-k256` | EVM signatures | Ethereum compatibility |
| SR25519 | `blueprint-crypto-sr25519` | Schnorrkel | Tangle compatibility |
| ED25519 | `blueprint-crypto-ed25519` | Zebra | General purpose |
| BLS | `blueprint-crypto-bls` | Signature aggregation | Threshold signatures |
| BN254 | `blueprint-crypto-bn254` | Pairing-based crypto | Zero-knowledge proofs |
This modular structure allows developers to include only the cryptographic primitives required for their specific use case.
A key advanced feature is signature aggregation, enabled through the aggregation feature:
```toml
[features]
aggregation = [
    "blueprint-crypto-sp-core/aggregation",
    "blueprint-crypto-bls/aggregation",
    "blueprint-crypto-bn254/aggregation",
]
```

The local database store provides:

- Persistent storage between blueprint restarts
- JSON serialization for data storage
- Filesystem-based storage solution
- Simple API for data storage and retrieval
The blueprint-producers-extra crate provides a cron job producer that enables scheduled execution of jobs:
Diagram: enabling the `cron` feature provides the cron job producer, which builds on tokio-cron-scheduler, chrono, and tokio; it produces job calls that are processed by the router and executed by job handlers.
To use the cron job producer, enable the cron feature:
blueprint-producers-extra = { features = ["cron"] }
The cron job producer allows:
- Scheduling jobs using cron expressions
- Recurring job execution
- Time-based automation within blueprints
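To illustrate the cron-expression idea, here is a toy matcher for a single cron field (minutes) supporting `*`, a literal `N`, and the `*/N` step syntax. Real parsing and scheduling are delegated to tokio-cron-scheduler; this sketch only shows how one field of an expression selects firing times.

```rust
// Toy matcher for the minutes field of a cron expression.
// Supported forms: "*" (every minute), "N" (exact match),
// "*/N" (every N minutes). Anything else matches nothing.
fn minute_matches(field: &str, minute: u32) -> bool {
    if field == "*" {
        true
    } else if let Some(step) = field.strip_prefix("*/") {
        // "*/15" fires at minutes 0, 15, 30, 45
        step.parse::<u32>()
            .map(|n| n != 0 && minute % n == 0)
            .unwrap_or(false)
    } else {
        field.parse::<u32>().map(|n| n == minute).unwrap_or(false)
    }
}
```

A scheduler loop would evaluate each field of the full expression (minute, hour, day, month, weekday) against the current time and emit a job call on a match.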
The blueprint-evm-extra package provides advanced utilities for working with EVM-compatible blockchains.
Key features include:
- Enhanced EVM client capabilities
- Pubsub functionality for blockchain events
- Advanced utilities for smart contract interaction
- Transaction management tools
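The pubsub shape can be sketched as a toy in-process publisher over standard-library channels: each subscriber gets its own receiver, and every published "block event" fans out to all of them. This is a hypothetical simplification; the real crate subscribes to chain events over RPC.

```rust
use std::sync::mpsc;

// Toy event type standing in for an EVM block notification.
#[derive(Clone, Debug)]
struct BlockEvent {
    number: u64,
}

// Fan-out publisher: one sender handle per subscriber.
struct Publisher {
    subscribers: Vec<mpsc::Sender<BlockEvent>>,
}

impl Publisher {
    fn new() -> Self {
        Self { subscribers: Vec::new() }
    }

    // Register a new subscriber and hand back its receiving end.
    fn subscribe(&mut self) -> mpsc::Receiver<BlockEvent> {
        let (tx, rx) = mpsc::channel();
        self.subscribers.push(tx);
        rx
    }

    // Deliver a cloned event to every subscriber; send errors from
    // dropped receivers are ignored.
    fn publish(&self, event: BlockEvent) {
        for sub in &self.subscribers {
            let _ = sub.send(event.clone());
        }
    }
}
```

Each subscriber independently drains its receiver, mirroring how multiple blueprint components can consume the same chain-event stream.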
For advanced testing scenarios, Blueprint provides specialized testing utilities.
- `blueprint-testing-utils`
- `blueprint-core-testing-utils`
- `blueprint-anvil-testing-utils`
- `blueprint-tangle-testing-utils`
- `blueprint-eigenlayer-testing-utils`
These testing utilities allow:
- Protocol-specific test environments
- Deterministic testing of blockchain interactions
- Simulated network conditions
- Integration testing across multiple protocols
Feature Flags:
- anvil feature
- tangle feature
- eigenlayer feature
Blueprint's modular architecture allows for integration with custom blockchain protocols beyond the built-in support for Tangle, EVM, and Eigenlayer.
The custom protocol integration process typically involves:
- Implementing client interfaces for the new protocol
- Adding cryptographic primitives required by the protocol
- Extending the networking layer if needed
- Creating protocol-specific context extensions
- Developing custom job handlers for protocol interactions
Diagram: a custom protocol integration implements the client and crypto traits and extends the network service; the client registers with the Blueprint Runner, the crypto integration is used by the keystore, and the network extension enhances the P2P network.