Hierarchical Summary

Source: https://deepwiki.com/tangle-network/blueprint
Generated: 2025-06-09T00:21:27.280308
Command Type: deep_crawl
Original Size: 573.53 KB (587,184 chars)
Summary Size: 410.53 KB (420,163 chars)
Compression: 28.4% reduction (1.4:1 ratio)


Overview of the Blueprint Framework

The Blueprint framework provides a comprehensive set of tools for building decentralized applications across multiple blockchain networks. Key advantages include:

  • Protocol Agnostic: Write once, deploy to multiple blockchain environments.
  • Modular Design: Use only the components you need.
  • Extensible: Add custom functionality through middleware and extensions.
  • Secure: Strong cryptographic foundations and key management.
  • Networked: Built-in P2P networking capabilities.

For more detailed information about specific aspects of the framework, see the related documentation pages.

Getting Started

To start using the Blueprint framework, you first need to install the cargo-tangle CLI tool:

curl --proto '=https' --tlsv1.2 -LsSf https://github.com/tangle-network/gadget/releases/download/cargo-tangle/v0.1.1-beta.7/cargo-tangle-installer.sh | sh

or install from source:

cargo install cargo-tangle --git https://github.com/tangle-network/gadget --force

Once installed, you can create your first blueprint:

# Create a new blueprint named "my_blueprint"
cargo tangle blueprint create --name my_blueprint

# Navigate into the blueprint directory and build
cd my_blueprint
cargo build

Blueprint Architecture

The Blueprint framework is a comprehensive toolkit for building, deploying, and managing decentralized applications (dApps) across multiple blockchain environments. It supports multiple blockchain protocols, including Tangle Network, Eigenlayer, and EVM-compatible chains.

Framework Architecture

Blueprint's architecture is built around a modular, extensible core system with specialized components for different blockchain environments.

(Architecture diagram: the cargo-tangle CLI, Blueprint SDK, Blueprint Runner, and Blueprint Manager sit on top of the core components (cryptography, blockchain clients, networking, job router, keystore, blueprint sources), with protocol integrations for Tangle Network, EVM clients, and Eigenlayer.)
Key Components

  • CLI (cargo-tangle): Command-line interface for creating, managing, and deploying blueprints.
  • Blueprint SDK: Core toolkit with various components for different functionalities.
  • Blueprint Runner: Executes blueprint operations in a protocol-specific manner.
  • Blueprint Manager: Orchestrates blueprint lifecycle, handling events and sources.

Blueprint Manager Functions

  • Monitors events from blockchain networks.
  • Manages the lifecycle of blueprint services.
  • Fetches and spawns blueprints based on events.

Blueprint Runner Functions

  • Configures the execution environment.
  • Coordinates job execution.
  • Provides protocol-specific runtime capabilities.

Protocol Support

The Blueprint framework supports multiple blockchain protocols through specialized client implementations and protocol-specific extensions.

(Protocol support diagram: deployment commands target Tangle, EVM, and Eigenlayer via the cargo-tangle CLI; protocol extensions (blueprint-tangle-extra, blueprint-evm-extra, blueprint-eigenlayer-extra) and protocol clients (blueprint-client-tangle, blueprint-client-evm, blueprint-client-eigenlayer, under blueprint-clients) provide the integrations.)

Core Components

Key SDK Components

  • Core: Fundamental abstractions for the job system and blueprint building blocks.
  • Router: Directs job requests to appropriate handlers.
  • Crypto: Multi-chain cryptographic primitives for various key types and signature schemes.
  • Clients: Protocol-specific clients for blockchain interactions.
  • Networking: P2P networking capabilities based on libp2p.
  • Keystore: Secure key management for multiple key types.

Job and Router System

The job system provides a unified way to handle various tasks across different protocols.

Job execution flow: a JobCall (with a JobId) enters the Router. On a route match it is dispatched to the matching Job Handler; with no match it goes to the Fallback Handler. Always Handlers run for every call. Each path produces a JobResult.

Blueprint SDK

SDK Components

The Blueprint SDK is highly modular, consisting of multiple crates that can be used independently:

The top-level blueprint-sdk crate ties together:

  • blueprint-core, blueprint-router, blueprint-runner
  • blueprint-crypto (crypto-core, crypto-k256, crypto-sr25519, crypto-ed25519, crypto-bls, crypto-bn254)
  • blueprint-keystore
  • blueprint-clients (client-core, client-tangle, client-evm, client-eigenlayer)
  • blueprint-networking and blueprint-stores

Job System

Job System Components

  • JobCall: A request to execute a specific job with associated data
  • JobId: A unique identifier for jobs
  • Router: Examines incoming JobCalls and directs them to appropriate handlers
  • Handlers: Functions that execute the logic for specific jobs
  • JobResult: The result returned after job execution
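
A minimal sketch tying these pieces together, assuming Router is re-exported at the SDK root and that handlers are plain async functions (paths and signatures are illustrative, not verified):

```rust
use blueprint_sdk::Router; // assumed re-export
use bytes::Bytes;

// A handler: the Job trait is implemented automatically for async functions
// with compatible signatures, so this can be registered directly.
async fn greet(body: Bytes) -> String {
    format!("Hello, {}!", String::from_utf8_lossy(&body))
}

// Map JobId 1 to the handler; calls with other IDs would hit a fallback.
fn build_router() -> Router {
    Router::new().route(1u32, greet)
}
```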

Blueprint Manager and Runner

The Blueprint Manager and Runner handle the lifecycle of blueprints from deployment to execution.

Event flow: blockchain events reach the Blueprint Manager's event handler; the Blueprint Source Handler fetches the blueprint from a GitHub, container, or test source; the Blueprint Runner Builder configures a finalized runner, which then executes jobs.

Networking and Cryptography

The Blueprint framework provides robust networking and cryptographic capabilities to support secure, distributed applications.

Key Management

  • Keystore: Secure key storage with multiple backends (file-based, in-memory, remote).
  • Signers: Local Signer, Remote Signer, Hardware Signer.

Cryptography

Supports multiple signature schemes and key types:

  • K256 (secp256k1): Used for Ethereum and other EVM chains.
  • SR25519: Used for Tangle and other Substrate-based blockchains.
  • ED25519: General-purpose EdDSA implementation.
  • BLS: Boneh-Lynn-Shacham signatures for aggregated signing.
  • BN254: Barreto-Naehrig curves for zero-knowledge proofs.

Networking

Built on libp2p, providing:

  • Peer-to-peer communication: Direct communication between nodes.
  • Discovery: Finding and connecting to peers.
  • Extensions: Support for specialized protocols like aggregated signatures and round-based protocols.

Supported Protocols

  • Tangle Network: Native support for the Tangle blockchain.
  • EVM-compatible chains: Support for Ethereum and other EVM chains.
  • Eigenlayer: Support for Eigenlayer's consensus mechanisms.

Deployment Tools

Each supported protocol has corresponding CLI commands for deploying to that environment.

Deployment

Deploy your blueprint to the Tangle Network

cargo tangle blueprint deploy --rpc-url wss://rpc.tangle.tools --package my_blueprint

Blueprint SDK Overview

The Blueprint SDK is a comprehensive framework for building, deploying, and managing decentralized applications across multiple blockchain environments. It provides a modular, protocol-agnostic foundation that enables developers to create services that can run on Tangle Network, Eigenlayer, and EVM-compatible blockchains.

Installation

To use the Blueprint SDK in your project, add it to your Cargo.toml:

[dependencies]
blueprint-sdk = { version = "0.1.0-alpha.7", features = ["std", "tracing"] }

You can enable additional features based on your requirements:

[dependencies]
blueprint-sdk = { version = "0.1.0-alpha.7", features = ["std", "tracing", "tangle", "networking"] }

Creating a Basic Blueprint

A blueprint is built using the job system provided by the SDK. Here's a simplified example of how to create and configure a blueprint:

  1. Define your job handlers
  2. Configure a router to map job IDs to handlers
  3. Create a runner to execute the jobs

Blueprint Creation Flow:

  • Define Job Handlers
  • Configure Router
  • Create Runner
  • Build Runner
  • Use Runner for Job Execution
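
A condensed sketch of this flow; the runner builder names (BlueprintRunner, BlueprintEnvironment, TangleConfig) follow patterns in this document but are assumptions, so that part is left as comments:

```rust
use blueprint_sdk::Router; // assumed re-export

// 1. Define a job handler.
async fn say_hello() -> String {
    "Hello, Blueprint!".to_string()
}

// 2. Configure a router mapping a job ID to the handler.
fn build_router() -> Router {
    Router::new().route(0u32, say_hello)
}

// 3-5. Create, build, and run the runner (sketched; exact API may differ):
//
//   let env = BlueprintEnvironment::load()?;
//   BlueprintRunner::builder(TangleConfig::default(), env)
//       .router(build_router())
//       .run()
//       .await?;
```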

Core Components

Blueprint SDK

Blueprint SDK Architecture

  • blueprint-sdk
  • blueprint-core
  • blueprint-router
  • blueprint-runner
  • blueprint-crypto
  • blueprint-keystore
  • blueprint-clients
  • blueprint-networking
  • blueprint-contexts
  • blueprint-stores
  • blueprint-chain-setup

Core Components

The SDK is built around several core components that form the foundation of the blueprint system:

Blueprint Core

The blueprint-core crate provides the fundamental abstractions and types for the entire system, including the Job system.

Job Router

The blueprint-router handles the routing of jobs to appropriate handlers. It supports:

  • Exact matches
  • Fallback routes
  • "Always" routes that execute for every job call

Job routing: an exact match dispatches to the registered handler; no match falls through to the fallback; "always" routes execute for every call.

Router Configuration

router.route(job_id, handler)   // register a handler for an exact job ID
router.fallback(handler)        // handle calls with no matching route
router.always(handler)          // run for every job call
router.layer(middleware)        // apply middleware to the routes

Job call structure: JobCall(job_id, payload) → Router → Job Handler / Fallback Handler / Always Handler → JobResult

Blueprint Runner

The blueprint-runner is responsible for executing blueprints and managing their lifecycle. It provides the execution environment for the jobs and routes them to the appropriate handlers.

Clients

The blueprint-clients crate provides protocol-specific client implementations for interacting with different blockchain networks. It supports:

  • Tangle Network via blueprint-client-tangle
  • EVM-compatible chains via blueprint-client-evm
  • Eigenlayer via blueprint-client-eigenlayer

Client architecture: blueprint-client-core underpins blueprint-client-tangle (built on tangle-subxt), blueprint-client-evm (built on the Alloy libraries), and blueprint-client-eigenlayer (built on eigensdk).

Networking

The blueprint-networking crate provides peer-to-peer networking capabilities built on libp2p, including extensions for different networking protocols such as round-based protocols and aggregated signature gossip.

Contexts

The blueprint-contexts crate provides context providers for:

  • Tangle Network
  • EVM-compatible chains
  • Eigenlayer
  • Networking
  • Keystore

Feature Flags

The Blueprint SDK employs a feature-flag system for enabling/disabling components based on application needs:

  • std: Enables standard library support
  • web: Enables support for web targets
  • tracing: Enables tracing support for debugging and monitoring

Cryptography

Cryptography

The blueprint-crypto crate provides cryptographic utilities for key generation, signing, and verification. It supports multiple cryptographic primitives:

  • K256 (secp256k1) via blueprint-crypto-k256
  • SR25519 via blueprint-crypto-sr25519
  • ED25519 via blueprint-crypto-ed25519
  • BLS via blueprint-crypto-bls
  • BN254 via blueprint-crypto-bn254

Keystore

The blueprint-keystore manages cryptographic keys, providing secure storage and access to keys for different cryptographic schemes and protocols.

Feature Flags

Protocol Support

  • tangle: Enables Tangle Network support
  • evm: Enables EVM support
  • eigenlayer: Enables Eigenlayer support

Networking Support

  • networking: Enables peer-to-peer networking support
  • round-based-compat: Enables round-based protocol extensions

Utilities

  • local-store: Enables local key-value stores
  • macros: Enables all macros from subcrates
  • build: Enables build-time utilities
  • testing: Enables testing utilities
  • cronjob: Enables cron job producer

Feature Flag Dependencies

From the dependency diagram: tangle enables blueprint-clients/tangle, blueprint-contexts/tangle, and blueprint-runner/tangle; evm enables blueprint-clients/evm and blueprint-contexts/evm; eigenlayer builds on evm; networking enables blueprint-networking, blueprint-contexts/networking, and blueprint-runner/networking; testing, std, build, and macros gate the corresponding utilities.

Using the SDK

Tangle Network

To interact with the Tangle Network, enable the tangle feature:

blueprint-sdk = { version = "0.1.0-alpha.7", features = ["tangle"] }

EVM-Compatible Chains

Enable the evm feature:

blueprint-sdk = { version = "0.1.0-alpha.7", features = ["evm"] }

Eigenlayer

Enable the eigenlayer feature:

blueprint-sdk = { version = "0.1.0-alpha.7", features = ["eigenlayer"] }

Testing

Enable the testing feature to access testing utilities:

blueprint-sdk = { version = "0.1.0-alpha.7", features = ["testing"] }

Job System

Job System and Router

The job system is the core abstraction in the Blueprint SDK. Jobs are identified by a JobId and can carry arbitrary payload data. The router maps these jobs to handlers, which process the jobs and return results.

A job handler can be any function or closure that takes a specific input type and returns a JobResult:

fn my_handler(input: MyInputType) -> JobResult<MyOutputType>

The router is configured by registering handlers for specific job IDs:

let mut router = Router::new();
router.route("my_job_id", my_handler);
router.fallback(fallback_handler);
router.always(always_handler);

Protocol Integration

The Blueprint SDK provides seamless integration with different blockchain protocols through its client implementations.

Tangle Network

Enable the tangle feature to access Tangle Network-specific functionality:

blueprint-sdk = { version = "0.1.0-alpha.7", features = ["tangle"] }

Testing Utilities

Enabling the testing feature provides access to:

  • blueprint-testing-utils: General testing utilities
  • blueprint-core-testing-utils: Core testing primitives
  • blueprint-anvil-testing-utils: Utilities for testing with Anvil (Ethereum)
  • blueprint-tangle-testing-utils: Utilities for testing with Tangle Network
  • blueprint-eigenlayer-testing-utils: Utilities for testing with Eigenlayer

Advanced Features

Chain Setup

The blueprint-chain-setup crate provides utilities for setting up and configuring different blockchain environments for development and testing.

Networking Extensions

The Blueprint SDK includes extensions for advanced networking patterns:

  • blueprint-networking-round-based-extension: Support for round-based protocols
  • blueprint-networking-agg-sig-gossip-extension: Support for aggregated signature gossip

Stores

The blueprint-stores crate provides key-value storage capabilities for blueprints. Enable the local-store feature to use local storage backends.

Conclusion

The Blueprint SDK provides a comprehensive, modular framework for building decentralized applications across multiple blockchain environments. Its flexible architecture allows developers to include only the components they need, while providing a consistent programming model regardless of the underlying blockchain protocol.

For more detailed information about specific components of the SDK, refer to the related SDK documentation pages.


Getting Started with Blueprints

Installation Instructions

Installing the Tangle CLI

You can install the CLI using the installation script (recommended):

curl --proto '=https' --tlsv1.2 -LsSf https://github.com/tangle-network/gadget/releases/download/cargo-tangle/v0.1.1-beta.7/cargo-tangle-installer.sh | sh

Alternatively, you can install from source:

cargo install cargo-tangle --git https://github.com/tangle-network/gadget --force

Creating a New Blueprint

The Tangle CLI provides a straightforward way to create a new blueprint project. The basic command is:

cargo tangle blueprint create --name <blueprint_name>

Creating Your First Blueprint

Prerequisites

Before creating your first blueprint, ensure you have the following installed:

  • Rust and Cargo
  • OpenSSL development packages
  • The Tangle CLI (cargo-tangle)

Installing Dependencies

For Ubuntu/Debian:

sudo apt update && sudo apt install build-essential cmake libssl-dev pkg-config

For macOS:

brew install openssl cmake

Creating a New Blueprint

Running cargo tangle blueprint create --name <blueprint_name> generates a new blueprint project with the specified name using the default template. The command creates a directory with your blueprint name, initializes a Git repository, and sets up the basic structure for your project.

Blueprint Types

The CLI supports different blueprint types:

  1. Tangle Blueprint (default) - For building services on Tangle Network
  2. Eigenlayer BLS Blueprint - For building BLS-based services on Eigenlayer
  3. Eigenlayer ECDSA Blueprint - For building ECDSA-based services on Eigenlayer

You can specify a blueprint type using the appropriate flag. The CLI will guide you through the process.

Blueprint Project Structure

When you create a new blueprint, you'll get a project with a structure similar to the following:

Your Blueprint Project
├── Cargo.toml (Project Config)
├── src/ (Source Code)
│   ├── main.rs (Entry Point)
│   ├── jobs.rs (Job Definitions)
│   └── lib.rs (Blueprint Library)
└── tests/ (Test Files)

Commands

  • Create a new blueprint:

    cargo tangle blueprint create
    
    • Specify name (--name)
    • Specify blueprint type (optional)
    • Specify template source (optional)
    • Generate from template
  • Build the project:

    cargo build
    
  • Test the project:

    cargo test
    
  • Deploy the blueprint:

    cargo tangle blueprint deploy
    
    • Register blueprint (optional)
    • Run blueprint service


Building and Deploying Blueprints

Key Components in a Blueprint Project

  • Cargo.toml - Project configuration and dependencies
  • src/main.rs - The entry point for your blueprint service
  • src/lib.rs - Core library code for your blueprint
  • src/jobs.rs - Definitions for jobs that your blueprint can process
  • tests/ - Unit and integration tests for your blueprint

Building Your Blueprint

To build your blueprint project, use the following commands:

cd my_blueprint
cargo build

For production deployment, build with the release profile:

cargo build --release

Deployment Process

  1. Deploy to Tangle Network
  2. Deploy to Eigenlayer
  3. Set Tangle-specific options
  4. Set Eigenlayer-specific options
  5. Generate Blueprint ID
  6. Deploy Smart Contracts
  7. Register as Operator (optional)
  8. Register as AVS Operator
  9. Run Blueprint Service on Tangle
  10. Run Blueprint Service on Eigenlayer

Testing and Deployment

Testing Your Blueprint

To ensure your blueprint works correctly, run the tests:

cargo test

Deploying Your Blueprint

After building your blueprint, you can deploy it to the target network. The Tangle CLI provides deployment commands for different networks:

cargo tangle blueprint deploy tangle --ws-rpc-url <WS_URL> --keystore-path <KEYSTORE_PATH> --package <PACKAGE_NAME>

For local development and testing, you can use the --devnet flag to automatically start a local testnet:

cargo tangle blueprint deploy tangle --devnet --package <PACKAGE_NAME>

Configuration for Blueprint Services

Blueprint Environment Configuration

When running your blueprint, you'll need to provide environment configuration. The deployment command will help set this up, but understanding the configuration elements is important:

| Configuration Element | Description | Default Value |
| --- | --- | --- |
| HTTP RPC URL | HTTP endpoint of the target blockchain | http://127.0.0.1:9944 |
| WebSocket RPC URL | WebSocket endpoint of the target blockchain | ws://127.0.0.1:9944 |
| Keystore Path | Path to the keystore containing your keys | ./keystore |
| Protocol | Target protocol (Tangle, Eigenlayer) | tangle |
| Blueprint ID | ID of the deployed blueprint (Tangle only) | (required for running) |
| Service ID | ID of the service instance (Tangle only) | (required for running) |

The BlueprintEnvironment holds these configuration values and is passed to the blueprint runner.
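
A minimal sketch of consuming this configuration, assuming a BlueprintEnvironment::load() constructor that reads the values above from the process environment (the import path and field name are assumptions, not verified API):

```rust
use blueprint_runner::config::BlueprintEnvironment; // assumed path

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Loads HTTP/WS RPC URLs, keystore path, protocol, and IDs.
    let env = BlueprintEnvironment::load()?;
    println!("keystore: {}", env.keystore_uri); // field name assumed
    Ok(())
}
```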


Job System Architecture

Blueprint Framework Components

(Component diagram: the cargo-tangle CLI creates and builds blueprints; the Blueprint Manager fetches, spawns, and manages them; the Blueprint Runner routes job calls to handlers, which process them and send results to consumers.)

Example cargo-tangle CLI run command:

cargo tangle blueprint run --protocol tangle --rpc-url http://127.0.0.1:9944 --keystore-path ./keystore

Understanding the Job System

  • Job System Architecture: an Event Producer produces a JobCall (with a JobId), which is routed to the Router, dispatched to a Job Handler Function, processed, and returned as a JobResult that is sent to a Job Consumer.
  • Blueprint Definition:

    1. Job Producers: Sources of events that create job calls.
    2. Job Handlers: Functions that process job calls.
    3. Job Router: Routes job calls to appropriate handlers.
    4. Job Consumers: Destinations for job results.
  • Blueprint Runner: Orchestrates the flow, ensuring jobs are routed and results distributed.

Next Steps

  • Adding new jobs to handle different events or tasks.
  • Implementing custom job producers for specific needs.
  • Extending your blueprint with networking capabilities.
  • Integrating with on-chain logic for complex use cases.

Resources and Further Learning

For more detailed information on these topics, refer to the Blueprint SDK documentation. To learn about example blueprints that showcase different capabilities, see Example Blueprints.



Getting Started

Basic Keystore Setup

// Assumes Keystore and KeystoreConfig come from the blueprint-keystore crate.
// Create a simple in-memory keystore
let config = KeystoreConfig::new().in_memory(true);
let keystore = Keystore::new(config)?;

// Or with file storage
let config = KeystoreConfig::new().fs_root("path/to/keystore");
let keystore = Keystore::new(config)?;

Installation

Prerequisites

Before installing the Tangle Blueprint framework, ensure your system has the following dependencies:

  • Rust (version 1.86 or later)
  • OpenSSL development packages
  • Basic build tools (compiler, linker)

Platform-Specific Dependencies

| Platform | Prerequisites | Installed via |
| --- | --- | --- |
| Ubuntu/Debian | build-essential, cmake, libssl-dev, pkg-config | apt install |
| macOS | openssl, cmake | brew install |
| Windows | Windows Subsystem for Linux (WSL2) | (see Ubuntu/Debian) |

Installation Methods

Ubuntu/Debian

sudo apt update && sudo apt install build-essential cmake libssl-dev pkg-config

macOS

brew install openssl cmake

Windows

Install Windows Subsystem for Linux (WSL2) and follow the Ubuntu/Debian instructions.

CLI Installation Methods

The cargo-tangle CLI is the primary tool for interacting with the Blueprint framework.

Option 1: Installation Script (Recommended)

curl --proto '=https' --tlsv1.2 -LsSf https://github.com/tangle-network/gadget/releases/download/cargo-tangle/v0.1.1-beta.7/cargo-tangle-installer.sh | sh

Option 2: Install from Source

cargo install cargo-tangle --git https://github.com/tangle-network/blueprint --force

Rust Toolchain Configuration

The Blueprint framework requires specific Rust components and versions to function properly.

Available Commands

  • blueprint create
  • blueprint deploy
  • generate-keys
  • list-blueprints
  • blueprint run

Setting Up the Development Environment

Rust Toolchain Requirements

  • Rust Version: 1.86
  • Required Components:
    • cargo
    • rustfmt
    • clippy
    • rust-src

Example rust-toolchain.toml:

[toolchain]
channel = "1.86"
components = ["cargo", "rustfmt", "clippy", "rust-src"]
profile = "minimal"

Development Environment Setup

Nix Development Environment

To use the Nix development environment:

  1. Install Nix with flakes support.
  2. Run:
nix develop

This will provide a shell with all necessary tools and dependencies installed.

Keystore Configuration

The Blueprint framework uses a flexible keystore system for managing cryptographic keys. Several backend options are available:

  • Keystore Storage Options:
    • InMemoryStorage (no persistence)
    • FileStorage (file system path)
    • SubstrateStorage (Substrate LocalKeystore)

Key Management

Key Types

| Key Type | Description | Usage |
| --- | --- | --- |
| SR25519 | Schnorrkel/Ristretto x25519 | Substrate/Tangle signatures |
| ED25519 | Edwards-curve Digital Signature Algorithm | General-purpose signatures |
| ECDSA | Elliptic Curve Digital Signature Algorithm | Ethereum/EVM signatures |
| BLS | Boneh–Lynn–Shacham signatures | Aggregate signatures for Eigenlayer |
| BN254 | Barreto–Naehrig curve | Zero-knowledge proofs |

Key Generation

To generate a new key pair:

cargo tangle blueprint generate-keys -k <KEY_TYPE> -p <PATH> -s <SURI/SEED> --show-secret


Where:

  • <KEY_TYPE>: sr25519, ecdsa, bls_bn254, ed25519, bls381
  • <PATH>: Directory to store the key (optional)
  • <SURI/SEED>: Seed phrase or string (optional)
  • --show-secret: Displays the private key (optional)

Example:

cargo tangle blueprint generate-keys -k sr25519 -p ./my-keystore --show-secret

Verification

To verify that the installation was successful:

cargo tangle --version

You should see the version number of the installed CLI.

To verify that all components are working correctly, create a simple test blueprint:

cargo tangle blueprint create --name test_blueprint
cargo build

If the build succeeds, your installation is working properly.

Troubleshooting

Missing Dependencies

If you encounter errors about missing libraries:

  • For OpenSSL issues: ensure you have libssl-dev (Ubuntu/Debian) or openssl (macOS) installed
  • For build tools: install build-essential (Ubuntu/Debian) or Xcode command line tools (macOS)

Keystore Issues

If you encounter keystore-related errors:

  • Check file permissions on the keystore directory
  • Ensure the path exists and is writable
  • For permission errors, try running with elevated privileges or adjusting directory permissions


Installation Components Overview

(Diagram: the complete installation architecture and how the various components of the Blueprint framework interact; not included in this summary.)



## Documentation

## Introduction to Tangle Network's Blueprint Framework

### Tangle Network's Blueprint Framework Overview

- **Overview**: [Overview](https://deepwiki.com/tangle-network/blueprint/1-overview)
- **Architecture Overview**: [Architecture Overview](https://deepwiki.com/tangle-network/blueprint/1.1-architecture-overview)
- **Key Concepts**: [Key Concepts](https://deepwiki.com/tangle-network/blueprint/1.2-key-concepts)
- **Protocol Support**: [Protocol Support](https://deepwiki.com/tangle-network/blueprint/1.3-protocol-support)

### Getting Started
- **Installation**: [Installation](https://deepwiki.com/tangle-network/blueprint/2.1-installation)
- **Creating Your First Blueprint**: [Creating Your First Blueprint](https://deepwiki.com/tangle-network/blueprint/2.2-creating-your-first-blueprint)
- **Example Blueprints**: [Example Blueprints](https://deepwiki.com/tangle-network/blueprint/2.3-example-blueprints)

### Blueprint SDK
- **Core Components**: [Core Components](https://deepwiki.com/tangle-network/blueprint/3.1-core-components)
- **Job System**: [Job System](https://deepwiki.com/tangle-network/blueprint/3.2-job-system)
- **Router**: [Router](https://deepwiki.com/tangle-network/blueprint/3.3-router)
- **Networking**: [Networking](https://deepwiki.com/tangle-network/blueprint/3.4-networking)
- **Keystore**: [Keystore](https://deepwiki.com/tangle-network/blueprint/3.5-keystore)

### Blueprint Runner
- **Runner Configuration**: [Runner Configuration](https://deepwiki.com/tangle-network/blueprint/4.1-runner-configuration)
- **Job Execution Flow**: [Job Execution Flow](https://deepwiki.com/tangle-network/blueprint/4.2-job-execution-flow)

### Blueprint Manager
- **Event Handling**: [Event Handling](https://deepwiki.com/tangle-network/blueprint/5.1-event-handling)
- **Blueprint Sources**: [Blueprint Sources](https://deepwiki.com/tangle-network/blueprint/5.2-blueprint-sources)

### CLI Reference
- **Blueprint Commands**: [Blueprint Commands](https://deepwiki.com/tangle-network/blueprint/6.1-blueprint-commands)
- **Key Management Commands**: [Key Management Commands](https://deepwiki.com/tangle-network/blueprint/6.2-key-management-commands)
- **Deployment Options**: [Deployment Options](https://deepwiki.com/tangle-network/blueprint/6.3-deployment-options)

### Development
- **Build Environment**: [Build Environment](https://deepwiki.com/tangle-network/blueprint/7.1-build-environment)
- **Testing Framework**: [Testing Framework](https://deepwiki.com/tangle-network/blueprint/7.2-testing-framework)
- **CI/CD**: [CI/CD](https://deepwiki.com/tangle-network/blueprint/7.3-cicd)

### Advanced Topics
- **Networking Extensions**: [Networking Extensions](https://deepwiki.com/tangle-network/blueprint/8.1-networking-extensions)
- **Macro System**: [Macro System](https://deepwiki.com/tangle-network/blueprint/8.2-macro-system)
- **Custom Protocol Integration**: [Custom Protocol Integration](https://deepwiki.com/tangle-network/blueprint/8.3-custom-protocol-integration)


## Architecture Overview

## Relevant Source Files
- [Cargo.lock](https://github.com/tangle-network/blueprint/blob/af8278cb/Cargo.lock)
- [Cargo.toml](https://github.com/tangle-network/blueprint/blob/af8278cb/Cargo.toml)
- [README.md](https://github.com/tangle-network/blueprint/blob/af8278cb/README.md)
- [CLI Cargo.toml](https://github.com/tangle-network/blueprint/blob/af8278cb/cli/Cargo.toml)
- [CLI README.md](https://github.com/tangle-network/blueprint/blob/af8278cb/cli/README.md)
- [Create Command](https://github.com/tangle-network/blueprint/blob/af8278cb/cli/src/command/create/mod.rs)
- [Keys Command](https://github.com/tangle-network/blueprint/blob/af8278cb/cli/src/command/keys.rs)
- [Eigenlayer Command](https://github.com/tangle-network/blueprint/blob/af8278cb/cli/src/command/run/eigenlayer.rs)
- [Client Cargo.toml (Eigenlayer)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/eigenlayer/Cargo.toml)
- [Client Cargo.toml (EVM)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/evm/Cargo.toml)
- [Client Cargo.toml (Tangle)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/tangle/Cargo.toml)
- [Contexts Cargo.toml](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/contexts/Cargo.toml)
- [Manager Cargo.toml](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/manager/Cargo.toml)
- [SDK Cargo.toml](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/sdk/Cargo.toml)
- [Testing Utils (Anvil)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/testing-utils/anvil/Cargo.toml)
- [Testing Utils (Core)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/testing-utils/core/Cargo.toml)
- [Testing Utils (Eigenlayer)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/testing-utils/eigenlayer/Cargo.toml)
- [Testing Utils (Tangle)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/testing-utils/tangle/Cargo.toml)

## System Overview
The Blueprint framework features a modular architecture for developing decentralized applications across Tangle Network, Eigenlayer, and EVM-compatible blockchains. Key components include:
- **Job System**: Defines and routes units of work.
- **Protocol Integrations**: Supports Tangle Network, EVM-compatible blockchains, and Eigenlayer.
- **Networking Layer**: Facilitates peer-to-peer communication.
- **Keystore**: Manages secure key storage.
- **Manager**: Handles the lifecycle of blueprints.
- **CLI**: Provides user interaction capabilities.

This architecture allows developers to create complex decentralized applications without needing to manage blockchain-specific implementation details.

## Blueprint SDK Architecture
The Blueprint SDK is the core of the framework, providing a comprehensive set of tools and libraries for blueprint development. It follows a highly modular design, allowing developers to include only the components they need for their specific use case.

### Key Components
- **Core crates**: `blueprint-core`, `blueprint-router`, `blueprint-runner`, `blueprint-clients`, `blueprint-crypto`, `blueprint-networking`, `blueprint-keystore`, `blueprint-contexts`, `blueprint-stores`
- **Protocol extras**: `blueprint-tangle-extra`, `blueprint-evm-extra`, `blueprint-eigenlayer-extra`
- **Crypto crates**: `blueprint-crypto-core`, `blueprint-crypto-k256`, `blueprint-crypto-sr25519`, `blueprint-crypto-ed25519`, `blueprint-crypto-bls`, `blueprint-crypto-bn254`
- **Client crates**: `blueprint-client-core`, `blueprint-client-tangle`, `blueprint-client-evm`, `blueprint-client-eigenlayer`

Sources: [15-26](https://github.com/tangle-network/blueprint/blob/af8278cb/README.md#L15-L26) [40-119](https://github.com/tangle-network/blueprint/blob/af8278cb/Cargo.toml#L40-L119) [15-35](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/sdk/Cargo.toml#L15-L35) [28-73](https://github.com/tangle-network/blueprint/blob/af8278cb/README.md#L28-L73)

## Job System Architecture
The job system in the Blueprint framework consists of three main components:
1. **Jobs**: Encapsulated units of work.
2. **Router**: Routes job calls to appropriate handlers.
3. **Runner**: Executes jobs in a protocol-specific manner.

The router examines the `JobId` of incoming `JobCall` objects and directs them to the appropriate handler. It supports three types of routes:
1. **Exact matches**: Routes that handle specific job IDs.
2. **Fallback routes**: Handle unmatched job calls.
3. **Always routes**: Execute for every job call.
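
The three route kinds, illustrated as a fragment using the router-configuration calls shown elsewhere in this document (handler names are hypothetical):

```rust
// Fragment: assumes a builder-style Router as in the earlier snippets.
let router = Router::new()
    .route(1u32, exact_handler)  // exact match on JobId 1
    .fallback(fallback_handler)  // runs when no JobId matches
    .always(metrics_handler);    // runs for every job call
```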

Sources: [Cargo.toml (core)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/core/Cargo.toml) [Cargo.toml (router)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/router/Cargo.toml) [Cargo.toml (runner)](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/runner/Cargo.toml)

## Blueprint Manager

## Manager Architecture
The Blueprint Manager monitors events from the Tangle network and manages the lifecycle of blueprint services. It handles fetching, spawning, and executing blueprints in response to network events.

Event flow: the Event Monitor watches network events and notifies the Blueprint Manager; the manager triggers a Source Handler (GitHub, container, or local source) to fetch the blueprint, then spawns it under a Blueprint Runner.


## Protocol Integration Architecture
The Blueprint framework integrates with multiple blockchain protocols through specialized client implementations, allowing developers to build applications that interact with different blockchains.

## Runner Adapters and Protocol Libraries

- **Tangle Client**: Interacts with the Tangle Network using the `tangle-subxt` library.
- **EVM Client**: Connects to EVM-compatible blockchains using Alloy libraries.
- **Eigenlayer Client**: Interfaces with Eigenlayer services using the `eigensdk` library.

#### Libraries
- `blueprint-client-core`
- `blueprint-client-tangle`
- `blueprint-client-evm`
- `blueprint-client-eigenlayer`
- `tangle-subxt`
- `alloy libraries`
- `eigensdk`

#### Adapters
- Tangle Adapter
- EVM Adapter
- Eigenlayer Adapter

Sources:
- [Tangle Client Cargo.toml](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/tangle/Cargo.toml#L1-L48)
- [EVM Client Cargo.toml](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/evm/Cargo.toml#L1-L64)
- [Eigenlayer Client Cargo.toml](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/eigenlayer/Cargo.toml#L1-L66)

## Networking Architecture
The Blueprint framework includes a robust networking layer built on libp2p, providing peer-to-peer communication capabilities for decentralized applications.

The libp2p-based stack comprises:

- Networking Layer: the Network Service and the Gadget Behaviour
- Peer Management: Peer Manager and Discovery (Kademlia DHT, MDNS Discovery)
- Behaviours: Blueprint Protocol, Ping, Gossipsub
- Network Extensions: Aggregated Signature Gossip, Round-Based Protocol

The networking layer includes extensions for specialized use cases:
- **Aggregated Signature Gossip**: For efficient signature aggregation in consensus protocols
- **Round-Based Protocol**: Compatibility layer for round-based MPC protocols

Sources: [GitHub Repository](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/networking/Cargo.toml)

## Keystore Architecture
The Blueprint framework includes a flexible keystore implementation for secure key management, supporting various key types and storage backends.

### Key Types
- **SR25519**: Used primarily for Tangle/Substrate-based chains
- **ED25519**: General-purpose EdDSA signatures
- **ECDSA**: Used for EVM-compatible blockchains
- **BLS**: Used for threshold signatures and signature aggregation
- **BN254**: Used for Eigenlayer's BLS verification

### Storage Backends
- **In-Memory Storage**
- **File Storage**
- **Remote Storage**

### CLI Architecture
The `cargo-tangle` CLI provides a user-friendly interface for interacting with the Blueprint framework, with commands for creating, deploying, and managing blueprints.

### Key Management CLI
A key management CLI is available for generating, importing, and exporting keys.

Sources: [1-321](https://github.com/tangle-network/blueprint/blob/af8278cb/cli/src/command/keys.rs#L1-L321) [69-77](https://github.com/tangle-network/blueprint/blob/af8278cb/Cargo.toml#L69-L77)

## CLI Command Structure

The CLI includes commands for various operations:

- **Blueprint Commands**: Creating, deploying, and running blueprints
- **Key Management Commands**: Generating, importing, and exporting cryptographic keys

### Commands Overview
- **Create a Blueprint**: Generates a new project from a template.
- **Deploy a Blueprint**: Uploads it to the selected blockchain platform (Tangle Network, Eigenlayer, etc.).

### Code Snippet Example
```bash
# Example commands for the CLI
cargo tangle blueprint create --name <blueprint_name>
cargo tangle blueprint deploy
cargo tangle blueprint run
cargo tangle blueprint list-blueprints
cargo tangle blueprint generate-keys -k <KEY_TYPE>
cargo tangle key import --key-type <KEY_TYPE> --keystore-path <PATH>
cargo tangle key list --keystore-path <PATH>
# (key export is also supported)
```

Testing Infrastructure

The Blueprint framework includes comprehensive testing utilities for unit testing, integration testing, and end-to-end testing of blueprints.

Chain Setup

  • Test Environments:
    • Tangle Testnet
    • Anvil Testnet
    • Eigenlayer Testnet

Testing Utilities

  • Core Testing Utilities: Common functionality used by all protocol-specific utilities.
  • Protocol-Specific Utilities: Specialized tools for:
    • Tangle
    • Anvil (EVM)
    • Eigenlayer

Setup Tools

  • blueprint-chain-setup
    • Tangle Setup
    • Anvil Setup

Context System

The Blueprint framework includes a context system that provides protocol-specific functionality to blueprint applications through extension traits and types.

Key Components

The blueprint-contexts crate, together with the blueprint-context-derive macro (which generates implementations), provides protocol contexts (Tangle Context, EVM Context, Eigenlayer Context) and feature contexts (Networking Context, Keystore Context).

The context system allows extending blueprint applications with protocol-specific functionality without tightly coupling the application code to a specific protocol implementation.

Cross-Protocol Execution Flow

When a blueprint is executed, the system follows a specific flow from the CLI through the various components to the target blockchain.

Execution Flow

  1. User Interface
  2. Deploy/Run
  3. Fetch Source
  4. Provide Blueprint
  5. Initialize
  6. Route Jobs
  7. Execute
  8. Protocol-Specific Execution
  9. Interact (multiple instances)

Components

(Diagram components: cargo-tangle CLI → Blueprint Manager → Blueprint Sources → Blueprint Runner → Job Router → Job Handlers → Job Results, with Protocol Adapters handling Tangle, EVM, and Eigenlayer execution against their respective networks.)

Flow Illustration

This flow illustrates how a blueprint moves from deployment or execution command through the system to interact with the target blockchain, with protocol adapters handling the specifics of each blockchain platform.



Getting Started with Tangle Network Blueprint

Protocol Support

Supported Protocols

The Blueprint framework currently supports the following blockchain protocols:

  • Tangle Network
  • EVM-compatible Chains
  • Eigenlayer

Protocol Feature Flags

The Blueprint SDK uses Rust's feature flags system to control which protocol modules are included in your application:

| Protocol | Feature Flag | Description |
| --- | --- | --- |
| Tangle Network | tangle | Enables Tangle Network support |
| EVM | evm | Enables EVM-compatible chain support |
| Eigenlayer | eigenlayer | Enables Eigenlayer support (implies evm) |

Tangle Network

The Tangle Network is the primary protocol for blueprint deployment. Users can:

  • Deploy blueprint code on-chain
  • Register as operators for blueprints
  • Request services from blueprints
  • Submit jobs to blueprint services
  • Automate event handling with the Blueprint Manager

EVM Support

The Blueprint framework provides integration with any EVM-compatible blockchain, including:

  • Ethereum
  • Layer 2 solutions
  • Local Anvil instances for development

EVM support enables blueprint applications to interact with smart contracts, send transactions, and listen for events on these networks.

Eigenlayer Support

Eigenlayer support builds on EVM capabilities and adds specialized features for Actively Validated Services (AVS):

  • BLS signature aggregation for efficient consensus
  • ECDSA signature verification
  • Operator state management
  • Delegation and staking capabilities

Protocol Integration Architecture

The Blueprint SDK implements protocol support through feature flags and a modular architecture:

(Integration diagram: core components connect to protocol clients (blueprint-client-tangle, blueprint-client-evm, blueprint-client-eigenlayer), protocol extensions (blueprint-tangle-extra, blueprint-evm-extra, blueprint-eigenlayer-extra), and testing utilities (blueprint-tangle-testing-utils, blueprint-anvil-testing-utils, blueprint-eigenlayer-testing-utils).)

Eigenlayer Client

The Eigenlayer client (blueprint-client-eigenlayer) extends EVM capabilities with:

  • AVS contract interactions
  • BLS signature aggregation
  • Operator registry management
  • Delegation and staking operations

The Eigenlayer client uses the eigensdk for integration with the Eigenlayer protocol.

Key Management Across Protocols

The Blueprint framework includes a unified keystore interface that supports multiple key types:

The blueprint-keystore crate supports the key types SR25519 (Tangle), ED25519, ECDSA (Tangle/EVM), BLS381, BLS377, and BN254 (Eigenlayer), with file and in-memory storage backends.

Eigenlayer Protocol Settings

For Eigenlayer, multiple contract addresses are required:

ALLOCATION_MANAGER_ADDRESS=<0x...>
REGISTRY_COORDINATOR_ADDRESS=<0x...>
OPERATOR_STATE_RETRIEVER_ADDRESS=<0x...>
DELEGATION_MANAGER_ADDRESS=<0x...>
SERVICE_MANAGER_ADDRESS=<0x...>
STAKE_REGISTRY_ADDRESS=<0x...>
STRATEGY_MANAGER_ADDRESS=<0x...>
STRATEGY_ADDRESS=<0x...>
AVS_DIRECTORY_ADDRESS=<0x...>
REWARDS_COORDINATOR_ADDRESS=<0x...>
PERMISSION_CONTROLLER_ADDRESS=<0x...>

Testing Utilities for Protocols

The Blueprint framework provides testing utilities for each supported protocol to facilitate local development and testing.

Client Implementations

Tangle Client

The Tangle client (blueprint-client-tangle) provides:

  • Blueprint deployment and management
  • Service registration and requests
  • Job submission
  • Event monitoring via WebSocket subscriptions


EVM Client

The EVM client (blueprint-client-evm) provides:

  • Contract deployment and interaction
  • Transaction submission and monitoring
  • Event filtering and subscription
  • Block and transaction data retrieval

The EVM client uses the Alloy libraries for type-safe interactions with EVM chains.

Key Management Utilities

Key management utilities are provided through the CLI:

cargo tangle key generate --key-type <KEY_TYPE>
cargo tangle key import --key-type <KEY_TYPE> --keystore-path <PATH> --protocol <PROTOCOL>
cargo tangle key list --keystore-path <PATH>

Protocol-specific key considerations:

  • Tangle: Primarily uses SR25519 keys for account operations and ECDSA for certain compatibility scenarios.
  • EVM: Uses ECDSA (secp256k1) keys.
  • Eigenlayer: Uses both ECDSA keys and BLS keys (BN254) for signature aggregation.
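
For example, using the generate-keys command documented above, one key per protocol might look like:

cargo tangle blueprint generate-keys -k sr25519 -p ./keystore    # Tangle account key
cargo tangle blueprint generate-keys -k ecdsa -p ./keystore      # EVM / Eigenlayer ECDSA key
cargo tangle blueprint generate-keys -k bls_bn254 -p ./keystore  # Eigenlayer BLS key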

Configuration Structure

Each protocol requires specific configuration settings, which can be provided through settings files or environment variables.

Protocol Settings Configuration

Common Settings

  • HTTP RPC URL
  • WebSocket RPC URL
  • Keystore URI
  • Data Directory

ProtocolSettings variants:

  • TangleProtocolSettings: Blueprint ID, Service ID (optional)
  • EigenlayerProtocolSettings: Registry Coordinator Address, Operator State Retriever Address, Delegation Manager Address, Service Manager Address, etc.

Tangle Protocol Settings

For Tangle, the following settings are required:

BLUEPRINT_ID=<Blueprint ID>
SERVICE_ID=<Service ID> # Optional


Testing Utilities

Protocol-Specific Testing Utilities

| Protocol | Crate | Features |
| --- | --- | --- |
| Tangle | blueprint-tangle-testing-utils | Local testnet, blueprint deployment, job execution |
| EVM | blueprint-anvil-testing-utils | Anvil integration, contract deployment |
| Eigenlayer | blueprint-eigenlayer-testing-utils | AVS testing, BLS aggregation testing |

CLI Support for Multiple Protocols

The CLI provides commands for working with different protocols through the blueprint deploy and blueprint run commands.

Deployment Commands

# Example command for deployment
cargo tangle blueprint deploy <PROTOCOL> [OPTIONS]

Deployment and Execution Commands

Protocol Deployment Commands

Tangle Deployment Options:

  • HTTP RPC URL
  • WebSocket RPC URL
  • Package name
  • Local devnet option
  • Keystore path

Eigenlayer Deployment Options:

  • RPC URL
  • Contracts path
  • Network (local, testnet, mainnet)
  • Local devnet option
  • Keystore path

Protocol Run Commands

The blueprint run command supports both Tangle and Eigenlayer protocols:

cargo tangle blueprint run --protocol <PROTOCOL> --rpc-url <URL> --settings-file <PATH>

When running with the Eigenlayer protocol, the command compiles and executes an AVS binary with appropriate configuration.

Creating Protocol-Specific Blueprints

The Blueprint framework allows creating protocol-specific blueprints using templates:

cargo tangle blueprint create --name <NAME> [--blueprint-type <TYPE>]

Available blueprint types:

  • tangle: Standard Tangle blueprint
  • eigenlayer-bls: Eigenlayer AVS with BLS signature aggregation
  • eigenlayer-ecdsa: Eigenlayer AVS with ECDSA verification


Context Extensions for Protocol Functionalities

Cross-Protocol Contexts and Extensions

The Blueprint framework provides context extensions that enable blueprints to access protocol-specific functionality:

| Protocol | Context Extension | Features |
| --- | --- | --- |
| Tangle | TangleContextExtension | Access to Tangle client and services |
| EVM | EvmContextExtension | Access to EVM clients and contracts |
| Eigenlayer | EigenlayerContextExtension | Access to Eigenlayer functionality |


Conclusion

The Blueprint framework's protocol support enables developers to build applications that work across multiple blockchain environments. By leveraging the modular architecture and protocol-specific extensions, applications can be built once and deployed to various supported protocols with minimal changes. For information on how to get started with a specific protocol, refer to the Getting Started section.


Getting Started with Tangle Network Blueprint

Creating Blueprints

Creating New Examples

Developers can create new examples using the Blueprint CLI:

cargo tangle blueprint create --name my_example

The CLI supports different blueprint types through templates:

  1. Basic Tangle templates
  2. Eigenlayer BLS templates
  3. Eigenlayer ECDSA templates

These templates provide the necessary structure and boilerplate code for different protocols and use cases.

Example Blueprints

Incredible Squaring: A Simple Example

The "Incredible Squaring" blueprint demonstrates core concepts of the Blueprint SDK by implementing a service that squares numbers submitted through jobs.

Project Structure

(Project structure diagram: the workspace contains the Incredible Squaring library crate and the incredible-squaring-bin binary that uses it; both depend on the SDK crates blueprint-router, blueprint-runner, and blueprint-core.)

Running the Example Blueprints

Examples can be run using the Blueprint CLI, which provides commands for building, deploying, and running blueprints.

Example Blueprint Deployment Flow

# Example CLI commands for deployment
cargo build
cargo tangle blueprint deploy
cargo tangle blueprint run

These examples provide working code that developers can run, modify, and use as templates for their own projects.

Workspace Structure

The workspace structure separates the core business logic (the library) from the application entry point (the binary), promoting code reuse and testability.

Blueprint Implementation Flow

The following describes how jobs flow through the Incredible Squaring blueprint: a client submits a number to square; the Tangle Producer creates a JobCall carrying the number; the Job Router routes the call by its ID to the squaring job; the Blueprint Runner executes the squaring function; and the squared result is returned as a JobResult through the Tangle Consumer back to the client.
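
A hedged sketch of the squaring job itself, following the Bytes-extractor handler pattern shown in the Core Components section (the real example's handler signature may differ):

```rust
use bytes::Bytes;

// Parse the submitted number from the job body and return its square.
async fn square(body: Bytes) -> String {
    let x: u64 = String::from_utf8_lossy(&body).trim().parse().unwrap_or(0);
    (x * x).to_string()
}
```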

Blockchain Protocol Support

Protocol-Specific Examples

The Blueprint SDK supports multiple blockchain protocols, with examples demonstrating how to register as a service provider, handle protocol-specific data, and integrate with different blockchain environments.

Protocol Support Architecture

The example blueprints demonstrate integration with various protocols through a common runner architecture:

(Runner architecture diagram: a BlueprintRunnerBuilder is configured with a protocol configuration (TangleConfig, EigenlayerBLSConfig, or EigenlayerECDSAConfig, each implementing the BlueprintConfig trait) and produces a FinalizedBlueprintRunner that runs example implementations such as Incredible Squaring.)


Tangle Protocol Examples

The Blueprint SDK includes examples for the Tangle Network, demonstrating how to:

  1. Register as an operator
  2. Handle service requests
  3. Process jobs on the Tangle Network

Eigenlayer Protocol Examples

For Eigenlayer, the examples demonstrate:

  1. BLS-based operator registration and service execution
  2. ECDSA-based operator registration and service execution
  3. Integration with Eigenlayer contracts and middleware

Blueprint Development Workflow

  1. Create Blueprint

    cargo tangle blueprint create
  2. Build Blueprint

    cargo build
  3. Deploy Blueprint

    cargo tangle blueprint deploy
  4. Run Blueprint

    cargo tangle blueprint run

Key Management for Examples

Example blueprints demonstrate secure key management through the Blueprint SDK's keystore system.

Additional Information

  • Protocol Registration: The deployment process sets up the necessary protocol-specific configuration and registers the blueprint with the selected network.

Key Management and Authentication

Generate Keys

To generate keys, use the following command:

cargo tangle blueprint generate-keys

Import Keys

Keys can be imported from existing sources.

Key Types

  • SR25519 Keys: For Tangle
  • ECDSA Keys: For Ethereum/Eigenlayer
  • BLS/BN254 Keys: For BLS Signatures

Authentication

  • Transaction Signing: Essential for verifying transactions.
  • P2P Networking: Important for secure communication between nodes.

Extending the Examples

The example blueprints are designed to be extensible. Developers can:

  1. Modify the job handling logic.
  2. Add custom background services.
  3. Implement additional protocol integrations.
  4. Create custom producers and consumers.

Example Extension: To extend the Incredible Squaring example with a cubing function:

  1. Add a new job handler function.
  2. Register it with a new job ID in the router.
  3. Deploy the updated blueprint.
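
A sketch of what that extension might look like, reusing the hypothetical handler pattern from the flow above (the job ID and registration call follow the router snippets in this document and are assumptions):

```rust
use bytes::Bytes;

const CUBE_JOB_ID: u32 = 2; // assumed unused by the existing blueprint

// 1. New job handler.
async fn cube(body: Bytes) -> String {
    let x: u64 = String::from_utf8_lossy(&body).trim().parse().unwrap_or(0);
    (x * x * x).to_string()
}

// 2. Register it with a new job ID in the router:
//    let router = Router::new().route(CUBE_JOB_ID, cube);
// 3. Redeploy with `cargo tangle blueprint deploy`.
```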


Available Examples Reference Table

The following table summarizes the key examples available in the Blueprint SDK:

| Example Name | Protocol | Main Functionality | Key Features |
| --- | --- | --- | --- |
| Incredible Squaring | Tangle | Number squaring | Job routing, basic producer/consumer |
| Incredible Squaring | Eigenlayer (BLS) | Number squaring | BLS signatures, Eigenlayer registration |
| Incredible Squaring | Eigenlayer (ECDSA) | Number squaring | ECDSA signatures, Eigenlayer registration |





Core Components

Overview

The Core Components provide the foundation of the Blueprint SDK, defining the fundamental abstractions and types that power the job-based processing model. This model allows for a unified way to handle requests from various sources, including blockchain events, timers, or network messages.

Summary

The Core Components offer a unified programming model for handling events consistently across different blockchain environments. By abstracting event handling into Jobs, JobCalls, and JobResults, Blueprint enables developers to write protocol-agnostic code deployable across various blockchain networks. Understanding these components is essential for effectively using the Blueprint SDK to build decentralized applications.

Job Processing System

  • Job Handling: a Producer (event source) generates a JobCall (with a JobId); the Router matches the JobId and dispatches the call to a Job Handler. Extractors pull data from the call for the handler function, whose return value is converted via IntoJobResult into a JobResult that is consumed by a Consumer.
  • Code Examples:

    // Job that extracts data from the call
    async fn data_job(body: Bytes) -> String {
        format!("Received: {}", String::from_utf8_lossy(&body))
    }
    
    // Job that uses context
    async fn context_job(Context(ctx): Context<AppContext>) -> String {
        format!("Using context: {}", ctx.name)
    }
  • JobId: a unique identifier used to route job calls to the appropriate handler. It is a 256-bit (32-byte) value that can be created from various types (u8, u32, String, str) via the standard from()/into() conversions.


Key Abstractions and Traits

Key Abstractions

The core components are built around several key abstractions:

| Abstraction | Description |
| --- | --- |
| Job | Trait for async functions that handle job requests |
| JobId | Unique identifier for routing job calls |
| JobCall | Representation of a job call event with metadata |
| JobResult | Result of job execution |
| IntoJobResult | Trait for converting values to JobResult |
| Extractors | Types that obtain data from job calls |

Job Trait

The Job trait is the cornerstone of the Blueprint processing model. It represents an async function that can handle job calls and produce results. The Job trait is automatically implemented for async functions with appropriate signatures, allowing them to be used directly as handlers:

// Simple job with no arguments
async fn simple_job() -> String {
    "Hello, world!".to_string()
}

Creating JobIds

JobId can be created from:

  • Primitive integers (u8, u16, u32, u64, etc.)
  • Strings (automatically hashed)
  • Byte arrays
  • Other types with specific implementations

Example usage:

// From numeric literals
const MY_JOB_ID: u32 = 1;
let job_id = JobId::from(MY_JOB_ID);

// From strings (hashed internally)
let str_job_id = JobId::from("my-job-name");

JobCall

JobCall represents a job call event that needs to be processed. It contains two main components:

  1. A header (Parts) with job metadata
  2. A body that holds the actual job data

The Parts structure contains:

  • job_id: The identifier for routing
  • metadata: Key-value pairs for job-specific metadata
  • extensions: Storage for request-specific data (similar to middleware)
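
To make the header/body split concrete, here is a small sketch using the JobCall accessors summarized later in this document (JobCall::new, job_id(), and into_parts()); the Bytes body and the equality comparisons on JobId are assumptions:

// Build a call, then split it into header (Parts) and body.
let call = JobCall::new(42u32, Bytes::from("payload"));
assert_eq!(call.job_id(), JobId::from(42u32));

let (parts, body) = call.into_parts();
// The Parts header exposes job_id, metadata, and extensions directly.
assert_eq!(parts.job_id, JobId::from(42u32));
assert_eq!(body, Bytes::from("payload"));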

Handling Void Jobs

A special Void type is provided for jobs that intentionally don't produce a result:

async fn no_result_job() -> Void {
    // Do something but don't return a result
}

Extractors

Extractors are types that extract data from JobCalls, making it easy to access specific parts of the request in job handlers. The framework provides two main extractor traits:

  • FromJobCall<Ctx, M>: declares an associated Rejection type and from_job_call(JobCall, &Ctx) -> Result<Self, Rejection>, for extractors that need the whole call, including its body.
  • FromJobCallParts<Ctx>: declares an associated Rejection type and from_job_call_parts(Parts, &Ctx) -> Result<Self, Rejection>, for extractors that only need the header parts.

Built-in extractors include Context<S> (a tuple struct wrapping shared state S), Metadata (wrapping the call's MetadataMap), and Bytes (the raw body).
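
As an illustration of this contract, here is a hedged sketch of a custom extractor built on FromJobCallParts; the &mut Parts receiver, the async method, and JobId being Copy and Debug are assumptions modeled on the signatures above:

use core::convert::Infallible;

// Hypothetical extractor that hands a handler its own JobId.
struct CallerJobId(JobId);

impl<Ctx: Send + Sync> FromJobCallParts<Ctx> for CallerJobId {
    type Rejection = Infallible;

    async fn from_job_call_parts(parts: &mut Parts, _ctx: &Ctx) -> Result<Self, Self::Rejection> {
        // Copy the JobId out of the call header (assumes JobId: Copy).
        Ok(CallerJobId(parts.job_id))
    }
}

// The extractor is then usable like any built-in one:
async fn which_job(CallerJobId(id): CallerJobId) -> String {
    format!("handling job {:?}", id)
}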

Creating Job Calls

Example creating a job call:

// Create a job call with an integer ID and string body
let call = JobCall::new(123, "job data");

// Create a job call with a string ID (hashed internally)
let call = JobCall::new("my-job", Bytes::from("job data"));

JobResult

JobResult represents the outcome of job execution. It can be either:

  • Ok containing a successful result with a body and metadata
  • Err containing an error

The IntoJobResult trait allows various types to be automatically converted to JobResult, making it convenient to return different types from job handlers:

// String is converted to JobResult
async fn string_job() -> String {
    "Hello, world!".to_string()
}

// Result is converted to JobResult
async fn fallible_job() -> Result<String, MyError> {
    Ok("Success".to_string()) // or Err(MyError::Failure)
}

Error Handling

Any error type that implements std::error::Error + Send + Sync + 'static can be returned from a job handler:

// Custom error type
#[derive(Debug)]
enum MyError {
    InvalidInput,
    ProcessingFailed,
}

impl std::fmt::Display for MyError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            MyError::InvalidInput => write!(f, "invalid input"),
            MyError::ProcessingFailed => write!(f, "processing failed"),
        }
    }
}

impl std::error::Error for MyError {}

// Job that can return an error
async fn fallible_job() -> Result<String, MyError> {
    if true {
        Ok("Success".to_string())
    } else {
        Err(MyError::InvalidInput)
    }
}

The IntoJobResult trait implementation for Result handles the conversion to JobResult::Ok or JobResult::Err automatically.

Integration with Other Components

The core components integrate with other parts of the Blueprint system:


  • Router: Uses core components to route JobCalls to the appropriate job handlers
  • Runner: Uses core components to execute and manage jobs
  • Clients: Generate JobCalls from blockchain events and process JobResults
  • Networking: Transfers JobCalls and JobResults between nodes
  • Keystore: Provides cryptographic services for job processing

Common Extractors

  1. Context<T>: Provides access to application context

    async fn handler(Context(ctx): Context<AppState>) {
        // Use ctx
    }
  2. Metadata: Extracts metadata from the job call

    async fn handler(Metadata(metadata): Metadata) {
        // Access metadata
    }
  3. Bytes: Extracts the raw body bytes

    async fn handler(body: Bytes) {
        // Process raw bytes
    }
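
Extractors compose, so one handler can take several at once. A small sketch (AppState with a name field is illustrative; the body extractor comes last, mirroring the examples above):

// Multiple extractors in a single handler: context, then raw body bytes.
async fn combined(Context(state): Context<AppState>, body: Bytes) -> String {
    format!("{} received {} bytes", state.name, body.len())
}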

Error Handling

Blueprint's core components provide flexible error handling through the Result type and automatic conversion to JobResult:

A job handler returns a value (often a Result); the return is converted via IntoJobResult, producing JobResult::Ok or JobResult::Err.


Core Components of the Blueprint Framework

Key SDK Components

  • Core: Fundamental abstractions for the job system and blueprint building blocks
  • Router: Directs job requests to appropriate handlers
  • Crypto: Multi-chain cryptographic primitives for various key types and signature schemes
  • Clients: Protocol-specific clients for blockchain interactions
  • Networking: P2P networking capabilities based on libp2p
  • Keystore: Secure key management for multiple key types

Job and Router System

At the heart of the Blueprint framework is the job system, which provides a unified way to handle various tasks across different protocols.

Job execution flow (diagram): a JobCall with a JobId enters the Router; on a route match it is dispatched to the matching Job Handler, and on no match to the Fallback Handler; Always handlers run for every call. Each executed handler produces a JobResult.


Deployment

Deployment targets (diagram): the cargo-tangle CLI deploys blueprints to Tangle, EVM, or Eigenlayer environments through the protocol extensions (blueprint-tangle-extra, blueprint-evm-extra, blueprint-eigenlayer-extra) and the corresponding clients in blueprint-clients (blueprint-client-tangle, blueprint-client-evm, blueprint-client-eigenlayer).


Blueprint SDK Overview

SDK Components

The Blueprint SDK is highly modular, consisting of multiple crates that can be used independently:

Blueprint SDK crates (diagram):

  • blueprint-core
  • blueprint-router
  • blueprint-runner
  • blueprint-crypto (crypto-core, crypto-k256, crypto-sr25519, crypto-ed25519, crypto-bls, crypto-bn254)
  • blueprint-keystore
  • blueprint-clients (client-core, client-tangle, client-evm, client-eigenlayer)
  • blueprint-networking
  • blueprint-stores

Job System Architecture

The job system consists of:

  • JobCall: A request to execute a specific job with associated data
  • JobId: A unique identifier for jobs
  • Router: Examines incoming JobCalls and directs them to appropriate handlers
  • Handlers: Functions that execute the logic for specific jobs
  • JobResult: The result returned after job execution

This design allows for flexible extensibility and composition of behaviors through middleware and routing configurations.
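
To make that composition concrete, a hedged sketch of wiring a router (route, always, fallback, and layer are the Router methods described in the Router section of this document; handler bodies, the job ID, and the tower TimeoutLayer choice are illustrative):

use std::time::Duration;
use tower::timeout::TimeoutLayer;

const SQUARE_JOB_ID: u32 = 0; // illustrative job ID

// A plain async function works as a handler (Job is auto-implemented).
async fn square(body: Bytes) -> String {
    let n: u64 = String::from_utf8_lossy(&body).trim().parse().unwrap_or(0);
    (n * n).to_string()
}

let router = Router::new()
    .route(SQUARE_JOB_ID, square)
    // Runs for every call; useful for logging or metrics.
    .always(|call: JobCall| async move {
        println!("processing job {}", call.job_id());
    })
    // Catches calls whose JobId has no specific route.
    .fallback(|| async { "unknown job".to_string() })
    // Tower middleware can wrap all routes.
    .layer(TimeoutLayer::new(Duration::from_secs(30)));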

Blueprint Manager and Runner

The Blueprint Manager and Runner work together to handle the lifecycle of blueprints from deployment to execution.

Event flow (diagram): blockchain events reach the Blueprint Manager's event handler; the manager fetches and spawns blueprints through the Blueprint Source Handler (GitHub, container, or test sources) and configures a Blueprint Runner builder, which is finalized into a runner that executes jobs.

Blockchain Protocol Integration

The Blueprint framework is a toolkit for building, deploying, and managing decentralized applications (dApps) across multiple blockchain environments. It supports multiple blockchain protocols, including Tangle Network, Eigenlayer, and EVM-compatible chains, enabling seamless application operation across these environments. For detailed protocol integrations, see Protocol Support.


Protocol Integration

Each protocol integration includes:

  • Client Library: For interacting with the blockchain
  • Protocol Extensions: Additional functionality specific to that protocol
  • Configuration: Protocol-specific settings and options
  • Deployment Tools: CLI commands for deploying to that environment

Supported Protocols

  • Tangle Network: Native support for the Tangle blockchain
  • EVM-compatible chains: Support for Ethereum and other EVM chains
  • Eigenlayer: Support for Eigenlayer's consensus mechanisms

Key Management

  • Keystore
  • Local Signer
  • Remote Signer
  • Hardware Signer

Cryptography

  • Crypto Modules:
    • K256 (secp256k1)
    • SR25519
    • ED25519
    • BLS
    • BN254

Networking

The networking layer is built on libp2p and provides:

  • Peer-to-peer communication: Direct communication between nodes
  • Discovery: Finding and connecting to peers
  • Extensions: Support for specialized protocols like aggregated signatures and round-based protocols

Networking Components

  • Network Service
  • Gadget Behaviour
  • Peer Manager

Key Management and Security

Cryptography

The cryptography modules support multiple signature schemes and key types:

  • K256 (secp256k1): Used for Ethereum and other EVM chains
  • SR25519: Used for Tangle and other Substrate-based blockchains
  • ED25519: General-purpose EdDSA implementation
  • BLS: Boneh-Lynn-Shacham signatures for aggregated signing
  • BN254: Barreto-Naehrig curves for zero-knowledge proofs

Keystore

The keystore system provides:

  • Secure key storage: Safe management of cryptographic keys
  • Multiple backends: File-based, in-memory, and remote options
  • Signature generation: Protocol-specific signing capabilities


Blueprints Overview

Blueprints are Infrastructure-as-Code templates that allow developers to build crypto services quickly. They provide the structure and behavior for applications running on various blockchain protocols.

Protocol Support

  • EVM Blockchains
  • Tangle Network
  • Eigenlayer

Blueprint Concepts

A blueprint deploys to a protocol as a Service Instance; the instance creates Jobs, which are routed by the Router and executed by the service.

Key Concepts

Job Processing Architecture

A blueprint defines the behavior of a service that processes jobs through a router, deployable to various blockchain environments like Tangle Network, Eigenlayer, and EVM-compatible chains.

Job System

The job system is the core mechanism for handling work within a blueprint. Jobs are async functions that process requests and produce results.

Structure (diagram): a JobCall consists of Parts (the header) and a Body (the arguments); the Parts contain the JobId, Metadata, and Extensions. A JobCall is routed by the Router to a Job Handler, which executes the job and produces a JobResult.

The Router inspects the JobId of incoming JobCall objects and routes them to the appropriate handler. It supports:

  • Exact matches: Routes a job call to a specific handler based on its JobId
  • Fallback routes: Handles job calls when no specific handler exists for a JobId
  • Always routes: Executes for every job call, regardless of JobId
  • Middleware: Applies transformations or validations to job calls before they reach handlers

Blueprint Runner

The Blueprint Runner orchestrates the execution of jobs and manages the lifecycle of a blueprint service.

JobCall Structure

JobCall

A JobCall is a request to execute a specific job. It consists of:

  • Header (Parts): Contains the JobId and metadata about the call.
  • Body: Contains the job arguments in a format specific to the producer.

JobId

A JobId is a unique identifier for each job registered with the system. It can be created from various primitive types like integers, strings, or byte arrays.

Job Handlers

Job handlers are async functions that implement the Job trait. They process JobCall requests and return JobResult objects.

// Simple job handler example
async fn job_handler() -> String {
    "Hello, World!".to_string()
}

JobResult and Routing Mechanisms

JobResult

A JobResult represents the outcome of a job execution. It can be either a success (Ok) with a body and metadata, or an error (Err).

Router System

The Router is responsible for directing job calls to the appropriate handlers based on the JobId.

Router Configuration
router.route(job_id, job)
router.fallback(job)
router.always(job)
router.layer(middleware)

Routing (diagram): a JobCall whose JobId matches a registered route is dispatched to its Job Handler; when nothing matches, the Fallback Handler runs; Always handlers execute for every call. Each handler produces a JobResult.

Blueprint Runner and Job Management

Blueprint Runner Overview

The Blueprint Runner:

  1. Receives job calls from producers
  2. Routes them to appropriate handlers using the Router
  3. Collects the results and sends them to consumers
  4. Manages background services that run alongside the job processing flow

Producers and Consumers

  • Producers: Generate JobCall objects that trigger job execution
  • Consumers: Receive JobResult objects produced by completed jobs


Background Services

Background services run alongside the job processing pipeline and perform ongoing tasks. They implement the BackgroundService trait.

Protocol Support

The Blueprint framework supports multiple blockchain protocols through specialized protocol settings and configurations.

Protocol Environment Setup

Protocol Types

  • Tangle Network: Substrate-based protocol supporting WebAssembly smart contracts.
  • Eigenlayer: EVM-based protocol with specialized cryptographic requirements (BLS, ECDSA).
  • EVM: General Ethereum Virtual Machine support.

Protocol Configurations

  • BlueprintConfig Trait
  • TangleConfig
  • EigenlayerBLSConfig
  • EigenlayerECDSAConfig
  • ProtocolSettings
  • TangleProtocolSettings
  • EigenlayerProtocolSettings
  • BlueprintEnvironment

Configurations for Blockchain Protocols

For each protocol, there are specialized settings and configurations that determine how blueprints interact with the blockchain.

Blueprint Environment

The BlueprintEnvironment describes the context in which a blueprint runs, including:

  • RPC Endpoints
  • Keystore Information
  • Protocol-Specific Settings

Protocol Settings Structure

BlueprintEnvironment
  ├── http_rpc_endpoint
  ├── ws_rpc_endpoint
  ├── keystore_uri
  ├── data_dir
  └── protocol_settings
      ├── networking settings
      ├── ProtocolSettings
      ├── TangleProtocolSettings
      └── EigenlayerProtocolSettings
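
A hedged sketch of reading this environment (field names follow the structure above; printable field types and the ProtocolSettings enum variants are assumptions):

// Inspect a BlueprintEnvironment before wiring up a runner.
fn describe(env: &BlueprintEnvironment) {
    println!("HTTP RPC: {}", env.http_rpc_endpoint);
    println!("WS RPC:   {}", env.ws_rpc_endpoint);
    println!("Keystore: {}", env.keystore_uri);
    match &env.protocol_settings {
        ProtocolSettings::Tangle(_) => println!("protocol: Tangle"),
        ProtocolSettings::Eigenlayer(_) => println!("protocol: Eigenlayer"),
        _ => println!("protocol: other"),
    }
}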


Keystore System and Key Management

The environment provides all necessary information for a blueprint to connect to blockchain networks, handle cryptographic operations, and access required resources.

Keystore System

The Keystore manages cryptographic keys used by blueprints for various operations like signing transactions and verifying messages.

The Keystore supports multiple key types (SR25519, ED25519, K256 ECDSA, BLS) and uses pluggable storage backends (file storage and in-memory storage).

Different protocols require different key types:

  • Tangle Network: SR25519, ECDSA
  • Eigenlayer: ECDSA (secp256k1), BLS (BN254)
  • EVM: ECDSA (secp256k1)

Extractors

Extractors are components that extract data from job calls to be used as arguments in job handlers.

Extractor System

  • JobCall
  • FromJobCall trait
  • FromJobCallParts trait
  • Body Type Extractors
  • Metadata Extractors
  • Context Extractors

Common Extractors

  • Context: Access to the global context of the blueprint
  • Metadata: Access to metadata attached to a job call
  • Body: Access to the job call's body in various formats


Implementing Blueprints in the Framework

Blueprints in Practice

When implementing a blueprint, developers typically:

  1. Define job handlers for specific tasks
  2. Configure a router to map job IDs to handlers
  3. Set up producers to generate job calls
  4. Configure consumers to process job results
  5. Implement protocol-specific registration if needed

At runtime, producers generate job calls, which are processed by the router and dispatched to job handlers; handlers produce job results, which are sent to consumers. The implementation flow proceeds in order: define job handlers, configure the router, set up producers, set up consumers, register with the protocol, and run the blueprint.

Blueprint CLI Reference

The Blueprint CLI (cargo-tangle) provides commands for creating, building, deploying, and managing blueprints on supported protocols.

Registration

Blueprints need to be registered with their target protocol before they can be used. The registration process varies by protocol:

  • Tangle Network: Register as an operator for a specific blueprint ID
  • Eigenlayer BLS: Register with BLS cryptographic verification
  • Eigenlayer ECDSA: Register with ECDSA cryptographic verification

Error Handling

The Blueprint framework provides comprehensive error handling through a hierarchical error system:

Error Hierarchy
- RunnerError
- Keystore Error
- Networking Error
- Config Error
- JobCall Error
- Producer Error
- Consumer Error
- Protocol-Specific Errors
- TangleError
- EigenlayerError

This error system helps developers identify and handle issues at various levels of the blueprint execution flow.

Getting Started

Installation Requirements

Required Dependencies

  • Rust Toolchain (1.86+)
  • OpenSSL Development Packages
  • Build Essentials
  • pkg-config
  • Foundry (for EVM development)
  • Protocol Buffers (for networking extensions)
  • Node.js (for contract testing)

Platform-specific Installation Commands

For Ubuntu/Debian:

sudo apt update && sudo apt install build-essential cmake libssl-dev pkg-config

For macOS:

brew install openssl cmake

For Rust (all platforms):

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Installing the Blueprint CLI

The Blueprint CLI (cargo-tangle) can be installed using two options:

Option 1: Install Script (Recommended)

curl --proto '=https' --tlsv1.2 -LsSf https://github.com/tangle-network/gadget/releases/download/cargo-tangle/v0.1.1-beta.7/cargo-tangle-installer.sh | sh

Option 2: Install from Source

cargo install cargo-tangle --git https://github.com/tangle-network/gadget --force

Verify the installation:

cargo tangle --version

Setting Up the Blueprint Environment

Prerequisites

Ensure the dependencies listed under Required Dependencies above are installed on your system.

Environment Variables

You can configure certain aspects of the Blueprint environment using environment variables:

| Variable | Description | Example |
|---|---|---|
| SIGNER | SURI of the Substrate signer account | export SIGNER="//Alice" |
| EVM_SIGNER | Private key of the EVM signer account | export EVM_SIGNER="0xcb6df9..." |

These environment variables can be used instead of a keystore for deployment operations.
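
For example, a deployment that signs with environment variables rather than a keystore might look like this (the deploy command is the one shown elsewhere in this guide; key values elided):

export SIGNER="//Alice"
export EVM_SIGNER="0xcb6df9..."
cargo tangle blueprint deploy --rpc-url wss://rpc.tangle.tools --package my_blueprint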

Nix Environment

The Nix environment includes all required dependencies including Rust toolchain, OpenSSL, and Foundry.

Next Steps

  1. Learn about the detailed installation options.
  2. Follow the step-by-step guide to creating your first blueprint.
  3. Explore the example blueprints.
  4. Dive deeper into the Blueprint SDK components.

Creating Blueprints

CLI Commands

  • cargo tangle
  • blueprint create
  • cargo build
  • cargo test
  • blueprint deploy
  • blueprint run
  • blueprint list-blueprints
  • blueprint request-service

Creating Your First Blueprint

After installing the CLI, you can create a new blueprint project:

# Create a new blueprint named "my_blueprint"
cargo tangle blueprint create --name my_blueprint

# Navigate into the blueprint directory
cd my_blueprint

The blueprint create command generates a new project from a template repository. By default, it creates a Tangle Network blueprint, but you can specify other blueprint types using the --type flag.

The command uses cargo-generate with pre-defined templates based on your chosen blueprint type:

| Blueprint Type | Template Repository |
|---|---|
| Tangle (default) | https://github.com/tangle-network/blueprint-template |
| Eigenlayer BLS | https://github.com/tangle-network/eigenlayer-bls-template |
| Eigenlayer ECDSA | https://github.com/tangle-network/eigenlayer-ecdsa-template |

Building and Testing

Once your blueprint is created, you can build and test it:

# Build the blueprint
cargo build

# Run the tests
cargo test

Key Management and Cryptography

Key Management

Blueprint provides a robust key management system through its keystore functionality. The keystore securely manages various types of cryptographic keys used in different protocols.

Keystore
+new(config: KeystoreConfig) : Result<Keystore>
+generate<T: KeyType>(seed: Option<&[u8]>) : Result<T::Public>
+insert<T: KeyType>(secret: &T::Secret) : Result<()>
+sign_with_local<T: KeyType>(public: &T::Public, msg: &[u8]) : Result<T::Signature>
+list_local<T: KeyType>() : Result<Vec<T::Public>>
+get_secret<T: KeyType>(public: &T::Public) : Result<T::Secret>

Generating Keys

You can generate keys using the CLI:

cargo tangle blueprint generate-keys -k <KEY_TYPE> -p <PATH> -s <SEED> --show-secret

Where:

  • KEY_TYPE: The key type to generate (sr25519, ecdsa, bls_bn254, ed25519, bls381)
  • PATH: Path to store the generated keypair (optional)
  • SEED: Seed to use for generation (optional)
  • --show-secret: Display the private key (optional)
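
For example, to generate an sr25519 keypair into a local keystore directory and print the secret (path illustrative):

cargo tangle blueprint generate-keys -k sr25519 -p ./keystore --show-secret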

Building and Deploying Blueprints

The build process compiles your Rust code and any smart contracts included in your project. If the project includes Foundry-based contracts, they will be compiled as part of the build process.

Deploying Your Blueprint

After building your blueprint, you can deploy it to a blockchain network:

# Deploy to the Tangle Network
cargo tangle blueprint deploy --rpc-url wss://rpc.tangle.tools --package my_blueprint

# Or deploy to a local testnet for development
cargo tangle blueprint deploy tangle --devnet --package my_blueprint

The deployment process includes:

  1. Connecting to the specified network
  2. Using a keystore to sign transactions
  3. Deploying the blueprint to the network
  4. Returning information about the deployed blueprint

Managing Deployed Blueprints

Interacting with Deployed Blueprints

After deployment, you can interact with your blueprints using various CLI commands:

| Command | Description |
|---|---|
| blueprint list-blueprints | List available blueprints |
| blueprint list-requests | List service requests |
| blueprint register | Register as a provider |
| blueprint request-service | Request a service |
| blueprint accept | Accept a service request |
| blueprint reject | Reject a service request |
| blueprint run | Run a blueprint |
| blueprint submit | Submit a job to a blueprint |

Nix Development Environment

For reproducible development environments, Blueprint provides a Nix flake configuration:

nix develop


Job System

Core Concepts

The Job System is built around several key abstractions that work together to process and execute tasks:

JobId

JobId is a unique identifier for each job, allowing the router to direct job calls to the appropriate handler.

JobResult

A JobResult represents the outcome of a job execution, which can be either a success (Ok) with a body and metadata, or an error (Err).

Job System Flow

The job system processes jobs through a series of steps from receipt to completion:

Producer -> BlueprintRunner -> Router -> Job Handler -> Consumer

Example Job Implementation

Jobs can be implemented as simple async functions:

// A job that immediately returns a string
async fn hello_job() -> String {
    "Hello, World!".to_string()
}

// A job that processes input data
async fn echo_job(body: Bytes) -> Result<String, String> {
    String::from_utf8(body.to_vec()).map_err(|_| "Invalid UTF-8".to_string())
}

// A job that uses context
async fn context_job(Context(ctx): Context<AppState>) -> String {
    format!("Using context: {}", ctx.value)
}

Data Extraction

The job system provides extractors to obtain data from job calls, making it easier to work with job inputs. The IntoJobResult trait provides automatic conversion for many common types:

  • String, &str - Converted to Bytes
  • Bytes, BytesMut - Used directly
  • () (unit) - Converted to empty Bytes
  • Result<T, E> - Success converted via IntoJobResult, error converted to BoxError
  • Void - Produces no result (returns None)
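
A compact illustration of these conversions, using only return types from the list above (std::io::Error stands in for any error implementing std::error::Error + Send + Sync + 'static):

use bytes::Bytes;

// Each return type implements IntoJobResult, so each function is a valid handler.
async fn text() -> String {
    "text result".to_string() // converted to a Bytes body
}

async fn raw() -> Bytes {
    Bytes::from_static(b"raw") // used directly
}

async fn silent() {
    // () converts to an empty Bytes body
}

async fn fallible() -> Result<String, std::io::Error> {
    Ok("ok".to_string()) // an Err would be converted to a BoxError
}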

Integration with Runner System

The Job System is a core part of the Blueprint Runner, orchestrating the entire execution flow.

Protocol Support and Configuration

Job System Support for Protocols

The job system supports different protocols through specific configurations and registration mechanisms.

Key Components

  • Protocol Support
    • register()
    • BlueprintConfig Trait
    • Registration Logic
    • TangleConfig
    • EigenlayerConfig

Each protocol implements the BlueprintConfig trait, which provides:

  1. Registration logic for the protocol
  2. Checking if registration is required
  3. Determining if the runner should exit after registration

Sources

  • [crates/runner/src/tangle/config.rs]
  • [crates/runner/src/eigenlayer/config.rs]
  • [crates/runner/src/eigenlayer/bls.rs]
  • [crates/runner/src/eigenlayer/ecdsa.rs]

Error Handling

The job system includes comprehensive error handling for different components.

JobId and JobCall Structures

JobId

  • JobId:
    • [u64; 4]
    • ZERO: JobId
    • MIN: JobId
    • MAX: JobId
    • from(value) : JobId
    • into() : T

Convertible from/to:

  • Numeric types: u8, u16, u32, u64, u128, i8, i16, i32, i64, i128, usize, isize
  • Byte arrays: [u8; 32]
  • Strings: &str, String
  • Byte slices: &[u8], Vec<u8>

JobCall

  • JobCall:
    • Parts head
    • T body
    • new(job_id, body) : JobCall
    • from_parts(parts, body) : JobCall
    • job_id() : JobId
    • metadata() : MetadataMap
    • extensions() : Extensions
    • into_parts() : (Parts, T)
    • map(F) : JobCall<U>

Parts:

  • JobId job_id
  • MetadataMap metadata
  • Extensions extensions
  • new(job_id) : Parts

Job Execution Process

  1. Producers generate JobCall events, which contain a JobId and data.
  2. The Runner receives these calls and passes them to the Router.
  3. The Router matches the JobId to the appropriate job handler.
  4. The Job Handler processes the call and returns a JobResult.
  5. The Runner collects the results and distributes them to all registered Consumers.

Blueprint Runner

The BlueprintRunner orchestrates the job execution flow, connecting producers, the router, job handlers, and consumers.

Extractors

The extractor traits (diagram): FromJobCall declares an associated type Rejection and from_job_call(JobCall, Ctx) -> Result<Self, Rejection>; FromJobCallParts declares an associated type Rejection and from_job_call_parts(Parts, Ctx) -> Result<Self, Rejection>. The built-in Context<S> and Metadata extractors are tuple structs implementing FromJobCallParts with Rejection = Infallible.

Extractors allow job handlers to:

  1. Extract specific data from job calls.
  2. Access shared context needed for processing.
  3. Handle metadata attached to job calls.

Job Results and Conversion

The job system allows returning various types from jobs, which are converted to JobResult via the IntoJobResult trait.

Conversion (diagram): return types such as String, Bytes/BytesMut, (), Result<T, E>, and custom types pass through the IntoJobResult trait to become a JobResult; Void produces no result (None).

BlueprintRunner and Its Configuration

The BlueprintRunner uses a builder pattern to configure all necessary components for job execution:

  1. Blueprint Configuration - Protocol-specific registration and requirements.
  2. Producers - Generate job calls from events (e.g., blockchain events).
  3. Router - Directs job calls to appropriate handlers.
  4. Consumers - Receive and process job results.
  5. Background Services - Additional services that need to run alongside jobs.

BlueprintRunnerBuilder

+builder(config, env) : BlueprintRunnerBuilder
BlueprintRunnerBuilder
-config: DynBlueprintConfig
-env: BlueprintEnvironment
-producers: Vec<Producer>
-consumers: Vec<Consumer>
-router: Option<Router>
-background_services: Vec<DynBackgroundService>
-shutdown_handler: F

+router(Router) : Self
+producer(Stream) : Self
+consumer(Sink) : Self
+background_service(BackgroundService) : Self
+with_shutdown_handler(F) : Self
+run() Result

FinalizedBlueprintRunner

FinalizedBlueprintRunner
-config: DynBlueprintConfig
-producers: Vec<Producer>
-consumers: Vec<Consumer>
-router: Router
-env: BlueprintEnvironment
-background_services: Vec<DynBackgroundService>
-shutdown_handler: F
-run() Result
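
Putting the builder together, a hedged sketch (config, env, router, the producer stream, the consumer sink, and the background service are assumed to be constructed elsewhere; awaiting run() is an assumption):

BlueprintRunner::builder(config, env)
    .router(router)                      // Router from the job system
    .producer(block_event_stream)        // a Stream yielding JobCalls
    .consumer(result_sink)               // a Sink receiving JobResults
    .background_service(metrics_service) // optional BackgroundService
    .with_shutdown_handler(async {
        println!("blueprint shutting down");
    })
    .run()
    .await?;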

Job Trait Implementation

The Job trait is the core abstraction for implementing job handlers. It is automatically implemented for async functions that take appropriate parameters and return types convertible to JobResult.

Error Handling in Job Execution

Error Types

  • RunnerError

    • NoRouter
    • NoProducers
    • Keystore (KeystoreError)
    • Networking (NetworkingError)
    • Io (IoError)
    • Config (ConfigError)
    • BackgroundService (String)
    • JobCall (JobCallError)
    • Producer (ProducerError)
    • Consumer (BoxError)
    • Tangle (TangleError)
    • Eigenlayer (EigenlayerError)
    • Other (BoxError)
  • JobCallError

    • JobFailed (BoxError)
    • JobDidntFinish (JoinError)
  • ProducerError

    • StreamEnded
    • Failed (BoxError)
Sources: [crates/runner/src/error.rs]

Best Practices

When using the job system, follow these guidelines:

  1. JobId Selection:
    • Use simple numeric IDs for basic applications.
    • Use string-based IDs or domain-specific types for more complex applications.
    • Consider using a hash-based ID for protocol-specific identifiers.
  2. Return Types:
    • Return Result<T, E> to properly handle errors.
    • Use Void when a job should not produce a result.
    • Return structured data for complex responses.
  3. Context Usage:
    • Use Context to share state between jobs.
    • Organize context into smaller components using FromRef (see the sketch after this list).
    • Consider thread-safety with shared mutable context.
  4. Error Handling:
    • Implement custom error types when needed.
    • Use the ? operator with Result returns for clean error propagation.
    • Provide meaningful error messages for debugging.
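
As a sketch of point 3, splitting a context with FromRef (DbPool, SignerHandle, is_connected, and the exact trait shape are illustrative assumptions modeled on the axum-style pattern these guidelines reference):

#[derive(Clone)]
struct AppContext {
    db: DbPool,
    signer: SignerHandle,
}

// Expose a sub-context so handlers can ask for just the piece they need.
impl FromRef<AppContext> for DbPool {
    fn from_ref(ctx: &AppContext) -> Self {
        ctx.db.clone()
    }
}

async fn query_job(Context(db): Context<DbPool>) -> String {
    format!("connected: {}", db.is_connected())
}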


Router Component Overview

The Router component is a central part of the Blueprint framework that handles dispatching jobs to the appropriate handlers based on job IDs. It serves as the routing mechanism between job calls and their implementations, allowing for flexible composition of jobs and services with middleware support.

Key Features:

  • Job Dispatching: Routes jobs to the correct handlers.
  • Middleware Support: Allows for the integration of middleware in job processing.
  • Flexible Composition: Facilitates the combination of jobs and services.


For more detailed information about the job system, refer to the Job System.

Router Architecture and Core Components

Purpose and Functionality

The Router takes incoming JobCall objects, matches them against registered handlers based on their job ID, and returns the results. It supports three types of routing mechanisms:

  1. Specific routes: Matched to exact job IDs
  2. Always routes: Executed for every job call regardless of ID
  3. Fallback routes: Used when no specific route matches

Router Architecture

The Router is built around an internal architecture that efficiently maps job IDs to their handlers.

Router internals (diagram): Router<Ctx> wraps RouterInner<Ctx>, which delegates to a JobIdRouter<Ctx> holding the specific-routes map, the always routes, and the fallback route. route(), always(), and fallback() register handlers; an incoming JobCall is matched by job_id and executed, yielding an Option<Vec<JobResult>>.

The core components are:

  • Router<Ctx>: The main entry point that wraps RouterInner and provides the public API
  • RouterInner: Contains the JobIdRouter and manages routing state
  • JobIdRouter: Performs the actual routing based on job IDs
  • Route: Represents a job handler that can process job calls and return results

Job Execution Flow in the Routing System

Job Execution Flow

When a job call is received, the Router processes it through a specific sequence to determine which handlers should execute.

"JobIdRouter" "Router" "Client"
alt[Fallback route exists][No fallback route]
alt[Job ID matches a specific route][No specific route matched]
JobCall call_with_context
Execute specific route
Execute always routes
All job results
Check for fallback route
Execute fallback route
Fallback job result
None (no routes matched)
Option<Vec<JobResult>>

Key Implementation Details:

  • Specific routes are matched exactly by job ID.
  • Always routes are executed regardless of specific route matches.
  • The fallback route only runs when no specific route matches.
  • Results from all executed routes are collected and returned.

Integration with Tower Services

The Router integrates with the tower middleware ecosystem by implementing the Service trait, allowing it to be composed with other services and middleware.

Service integration (diagram): the Router implements the tower Service trait, so a client can invoke it through call() with a JobCall, compose it with other services via route_service(), and wrap routes in middleware via layer().

Router API and Usage

API and Usage

The Router provides a builder-style API that allows for fluent configuration:

Creating a Router

let router = Router::new();

Adding Routes

// Add a route for a specific job ID (handler names are illustrative)
let router = router.route(MY_JOB_ID, my_job_handler);

// Add a handler that runs for every job call
let router = router.always(logging_handler);

// Add a handler that runs when no specific route matches
let router = router.fallback(unknown_job_handler);

Middleware Support

The Router can be enhanced with middleware layers that wrap around routes:

router = router.layer(middleware_layer);

Context Support

Routes can access shared context data:

router = router.with_context(context);

Job Routing Match Logic

The routing mechanism follows a specific pattern: specific routes are matched first by exact JobId; always routes execute for every call; and the fallback runs only when no specific route matched.


Integrating Middleware with the Router

Integration benefits:

  • Using the Router with existing tower middleware
  • Applying common patterns: rate limiting, timeouts, retries
  • Composing the Router with other services

Sources:

  • Routing implementation: https://github.com/tangle-network/blueprint/blob/af8278cb/crates/router/src/routing.rs#L285-L323
  • Future implementation: https://github.com/tangle-network/blueprint/blob/af8278cb/crates/router/src/future.rs#L77-L94

Example Usage

Here's a conceptual example of how the Router might be used in a Blueprint application:

```rust
// Create a new router
let router = Router::new()
    // Add a route for calculating squares
    .route(CALCULATE_SQUARE_JOB_ID, |call: JobCall| async move {
        // Parse the input number (assuming a Bytes body holding a little-endian u64)
        let (_, body) = call.into_parts();
        let number = u64::from_le_bytes(body[..8].try_into().unwrap());
        // Return the square
        (number * number).to_string()
    })
    // Add a route for calculating cubes
    .route(CALCULATE_CUBE_JOB_ID, |call: JobCall| async move {
        let (_, body) = call.into_parts();
        let number = u64::from_le_bytes(body[..8].try_into().unwrap());
        // Return the cube
        (number * number * number).to_string()
    })
    // Add a logging handler that runs for every job
    .always(|call: JobCall| async move {
        println!("Processing job: {}", call.job_id());
        // Returns nothing; this handler exists only for logging
    })
    // Add a fallback for unknown job IDs
    .fallback(|call: JobCall| async move {
        // Return an error message
        format!("Unknown job ID: {}", call.job_id())
    });
```
Conclusion

The Router is a versatile component of the Blueprint framework that enables flexible job routing and execution. It supports multiple routing strategies and middleware integration, providing a powerful foundation for building complex job processing systems while maintaining clean separation of concerns. The Router integrates closely with the Blueprint Runner to provide the execution environment for jobs, and with the wider Blueprint framework through the Job trait and Service implementations.

Sources: routing.rs, future.rs (full links above)


Keystore

The Keystore component of the Blueprint framework provides a secure and flexible system for managing cryptographic keys and signing operations. It handles key generation, storage, retrieval, and signing capabilities across various key types with support for multiple storage backends.

Purpose and Architecture

The Blueprint Keystore provides a unified interface for cryptographic key management operations while supporting pluggable storage backends. It's designed to securely store private keys while making them available for signing operations when needed.

Storage Backends

  • InMemoryStorage
  • FileStorage
  • SubstrateStorage

Key Types and Cryptography

  • ECDSA (K256)
  • Ed25519
  • SR25519
  • BLS (377/381)
  • BN254

Operations

  • generate()
  • sign_with_local()
  • list_local()
  • insert()
  • remove()


Core Components of the Keystore

Key Types and Cryptography

The Keystore supports various cryptographic algorithms through a unified KeyType interface.

| Key Type | Algorithm | Feature Flag | Common Use |
|---|---|---|---|
| SR25519 | Schnorrkel | sr25519-schnorrkel or tangle | Substrate account keys |
| ED25519 | Ed25519 (Zebra) | zebra or tangle | General purpose signatures |
| ECDSA | secp256k1 | ecdsa or tangle | EVM compatibility |
| BLS381 | BLS on BLS12-381 | bls or tangle | Aggregate signatures |
| BLS377 | BLS on BLS12-377 | bls or tangle | Aggregate signatures |
| BN254 | BN254 curve | bn254 | Zero-knowledge proofs |

The KeyType trait abstracts over these different cryptographic implementations, providing common operations like signing, verification, and key generation.

Implementation Details

Storage Priority

When multiple storage backends are configured, the Keystore tries them in priority order until it finds the key.

// Store keys with higher priority (255) for more important backends
let entry = LocalStorageEntry { storage, priority: 255 };
backends.push(entry);
backends.sort_by_key(|e| cmp::Reverse(e.priority));

Feature Flags

The Keystore uses Rust's feature flags to enable only what's needed:

  • std: Enables standard library features like file system storage
  • substrate-keystore: Enables Substrate keystore integration
  • Key type features: sr25519-schnorrkel, ecdsa, zebra, bls, bn254, sp-core
  • Remote signer features: aws-signer, gcp-signer, ledger-browser, ledger-node

Security Considerations

The Keystore is designed with security in mind, including:

  1. Key Isolation: Private keys never leave the storage backend unless explicitly exported.
  2. Feature-Based Security: Remote signing options can be used to keep keys in hardware devices or cloud HSMs.
  3. Multiple Backend Support: Critical keys can be stored in more secure backends while less sensitive keys use more convenient storage.

Storage Backends: InMemory, File, and Substrate

Keystore Overview

The Keystore provides a consistent interface for key storage mechanisms.

Storage Backends

The Keystore supports multiple storage backends through a unified interface. Each backend implements the RawStorage trait.

RawStorage Trait

  • TypedStorage Wrapper
  • Storage Implementations
    • InMemoryStorage
    • FileStorage
    • SubstrateStorage

Type-Safe Operations

  • store()
  • load()
  • remove()
  • contains()
  • list()

Storage Backend Types

| Backend | Purpose | Feature Flag | Persistence |
|---|---|---|---|
| InMemoryStorage | Fast in-memory storage | Always available | None (volatile) |
| FileStorage | File-based persistent storage | std | File system |
| SubstrateStorage | Integration with Substrate keystore | substrate-keystore | Depends on Substrate config |

Configuration

The Keystore is configured through the KeystoreConfig struct:

KeystoreConfig {
    in_memory(bool),
    fs_root(path),
    substrate(keystore),
    remote(config),
}

Important Configuration Options

  • in_memory(true): Enable in-memory storage (default if no other storage is specified)
  • fs_root(path): Enable file-based storage with the specified root directory
  • substrate(keystore): Enable Substrate keystore integration
  • remote(config): Enable remote keystores like AWS KMS, GCP KMS, or Ledger hardware wallets

Example Initialization

Keystore::new(config);

Key Management Operations

Commands

  • Generate a new SR25519 key

    cargo tangle key generate --key-type sr25519 --output ./my-key.json
  • Import an existing ECDSA key

    cargo tangle key import --key-type ecdsa --secret "0x1234..." --keystore-path ./keystore
  • List all keys in the keystore

    cargo tangle key list --keystore-path ./keystore
  • Export a private key

    cargo tangle key export --key-type sr25519 --public "0xabcd..." --keystore-path ./keystore
  • Generate a mnemonic phrase

    cargo tangle key generate-mnemonic --word-count 24

Usage Examples

Creating a Keystore

// In-memory keystore (default)
let keystore = Keystore::new(KeystoreConfig::new())?;

// File-based keystore
let keystore = Keystore::new(KeystoreConfig::new().fs_root("./keystore"))?;

// Substrate keystore
let substrate_keystore = sc_keystore::LocalKeystore::open(path, None)?;
let keystore = Keystore::new(KeystoreConfig::new().substrate(Arc::new(substrate_keystore)))?;

Key Operations

// Generate a new key
let public_key = keystore.generate::<SpSr25519>(None)?;

// Generate with a deterministic seed
let seed = b"my deterministic seed";
let public_key = keystore.generate::<K256Ecdsa>(Some(seed))?;

// Sign a message
let msg = b"message to sign";
let signature = keystore.sign_with_local::<SpSr25519>(&public_key, msg)?;

// List all keys of a specific type
let keys = keystore.list_local::<SpSr25519>()?;

// Check if a key exists
let exists = keystore.contains_local::<SpSr25519>(&public_key)?;

// Remove a key (method shape assumed from the remove() operation listed earlier)
keystore.remove::<SpSr25519>(&public_key)?;

CLI Integration

The Blueprint framework provides CLI commands for key management through the cargo-tangle command. These commands allow users to generate, import, export, and list keys.

CLI Examples

cargo tangle key generate --key-type sr25519
cargo tangle key import --key-type ecdsa --keystore-path ./keystore
cargo tangle key list --keystore-path ./keystore
cargo tangle key export --key-type sr25519 --public '0x...'

Key Commands

  • generate
  • import
  • export
  • list
  • generate-mnemonic


Error Handling

The Keystore provides a comprehensive error system through the Error enum, which covers:

  • Storage-related errors: StorageNotSupported, Io
  • Key operations: KeyNotFound, KeyTypeNotSupported
  • Cryptographic failures: SignatureFailed, Crypto
  • Backend-specific errors: SpKeystoreError, AwsSigner, etc.

All Keystore operations return a Result<T, Error> allowing for proper error handling and propagation.

match keystore.sign_with_local::<SpSr25519>(&public_key, msg) {
    Ok(signature) => {
        // Use the signature
    }
    Err(Error::KeyNotFound) => {
        // Handle missing key
    }
    Err(Error::KeyTypeNotSupported) => {
        // Handle unsupported key type
    }
    Err(e) => {
        // Handle other errors
        println!("Error signing message: {}", e);
    }
}

Best Practices for Keystore Usage

When using the Keystore in production, consider:

  • Using hardware-backed keystores where possible
  • Setting appropriate filesystem permissions for file-based keystores
  • Using secure memory handling techniques when working with private keys in memory


Blueprint Manager

The Blueprint Manager is a critical component of the Tangle Blueprint framework that handles the lifecycle of decentralized applications (dApps) running across blockchain networks. It monitors blockchain events, dynamically fetches application binaries from various sources, and manages their execution based on on-chain registrations and instructions.

Architecture Overview

The Blueprint Manager acts as an orchestration layer, connecting on-chain events to off-chain service execution. It listens for events from the Tangle Network, fetches the appropriate blueprint binaries based on on-chain specifications, and manages their execution lifecycle.

External Components
Blueprint Manager Architecture
Triggers
Executes
Blueprint Manager
Event Handler
Manager Config
Active Gadgets
Source Handler
GitHub Source
Container Source
Test Source
Tangle Client
Verified Blueprint
Process Handles
Tangle Network
Blueprint Events
Blueprint Binaries
Blueprint Runner


Core Components of the Blueprint Manager

The VerifiedBlueprint struct encapsulates a blueprint that has been validated against on-chain data and is ready for execution. It contains:

  • A list of potential source fetchers for the blueprint
  • The blueprint's metadata and configuration
  • Methods to start services for the blueprint


Event-Driven Architecture Explained

Event-Driven Execution Flow

The Blueprint Manager operates on an event-driven model, responding to events from the Tangle blockchain to manage blueprint services.

"Blueprint Runner" "Source Handler" "Blueprint Manager" "Tangle Network"
alt[New Blueprint Registration][Unregistered Service][Service Status Changed]
Finality Notification
Poll for Blueprint Events
Query Blueprint Details
Return Blueprint Data
Fetch Blueprint Binary
Binary Ready
Spawn New Service
Stop Service
Restart/Update Service
Update Active Gadgets

Event Types Processed

  1. PreRegistration: New blueprint registration that needs initialization.
  2. Registered: Confirms a blueprint has been registered.
  3. Unregistered: Indicates a blueprint has been unregistered and should be stopped.
  4. ServiceInitiated: Indicates a service has been started on-chain.
  5. JobCalled: Indicates a job has been called on a service.
  6. JobResultSubmitted: Indicates a job result has been submitted.

Blueprint Sources

The Blueprint Manager supports fetching blueprints from multiple sources, enabling flexibility in deployment and development workflows.

Blueprint Source System Overview

Key Components:

  • BlueprintSourceHandler Interface
  • Fetchers:
    • GitHub Source Fetcher
    • Container Source Fetcher
    • Test Source Fetcher

Methods:

  • fetch()
  • spawn()

Processes:

  • GitHub Release Binary
  • Container Image
  • Local Cargo Build
  • GitHub Binary Process
  • Docker Container
  • Test Binary Process
  • ProcessHandle

Source Handlers: GitHub, Container, and Test

Source Handlers

  1. GitHub Source:

    • Downloads binary releases from GitHub repositories
    • Verifies binary checksums for security
    • Manages binary execution as native processes
  2. Container Source:

    • Pulls container images from Docker/Podman registries
    • Manages container lifecycle via Docker API
    • Handles networking adaptations for containerized blueprints
  3. Test Source:

    • Builds binaries from local source code
    • Used primarily for testing and development
    • Compiles using cargo within the repository

Managing Blueprints: The BlueprintSourceHandler Trait

Each source handler implements the BlueprintSourceHandler trait, which provides a common interface for fetching and spawning blueprints.

Blueprint Verification and Execution

When a blueprint event is detected, the Blueprint Manager verifies the blueprint and prepares it for execution:

Flow (diagram): event detection → verify blueprint → select source fetcher → fetch binary/container → prepare arguments and environment → spawn process/container → track process status → monitor the running service. A running service is stopped on on-chain unregistration and restarted after process death; errors lead to cleanup of resources.

Configuration

The Blueprint Manager is configured via the BlueprintManagerConfig struct, which provides options for:

| Configuration | Description | Default |
|---|---|---|
| keystore_uri | Path to the keystore for blockchain identity | Required |
| data_dir | Directory for blueprint data storage | ./data |
| verbose | Verbosity level for logging | 0 |
| pretty | Enable pretty logging format | false |
| instance_id | Unique identifier for the manager instance | None |
| test_mode | Enable test mode for development | false |
| preferred_source | Preferred source type (Container, Native, Wasm) | Native |
| podman_host | URL for the Podman/Docker socket | unix:///var/run/docker.sock |

The manager also detects available source types on the system through the SourceCandidates struct, which checks for:

  • Container runtime availability (Docker/Podman)
  • Wasm runtime availability
  • System capabilities for native binary execution

Usage via SDK

The Blueprint Manager can be integrated into applications using its SDK interface:

"Tangle Runtime" "Blueprint Manager" "Blueprint SDK" "Application"
opt[Shutdown]
    Configure BlueprintManagerConfig
    Configure BlueprintEnvironment
    run_blueprint_manager()
    Create BlueprintManagerHandle
    start()
    Connect & Subscribe to Events
    Blockchain Events
    Process Events & Manage Blueprints
    await handle
shutdown()
Terminate All Blueprints

The BlueprintManagerHandle provides methods to:

  • Start the manager with start()
  • Access keypair information with sr25519_id() and ecdsa_id()
  • Shut down the manager with shutdown()
  • Wait for completion by awaiting the handle

CLI Integration

The Blueprint Manager is available as a CLI tool, used by the cargo-tangle command:

cargo tangle blueprint run

Configuration Options

  • Specifying RPC endpoints
  • Configuring keystore paths
  • Setting data directories
  • Selecting blueprint IDs

Key Functions

  • start()
  • await
  • run_blueprint_manager()
  • BlueprintManagerHandle


Blueprint Manager Handle

The BlueprintManagerHandle is a critical abstraction that provides control over the running Blueprint Manager:

BlueprintManagerHandle (diagram):

  - span: tracing::Span
  - sr25519_id: TanglePairSigner<sr25519::Pair>
  - ecdsa_id: TanglePairSigner<ecdsa::Pair>
  - keystore_uri: String
  - shutdown_call: Option<oneshot::Sender<()>>
  - start_tx: Option<oneshot::Sender<()>>
  - running_task: JoinHandle<Result<()>>

  + start() -> Result<()>
  + sr25519_id() -> &TanglePairSigner<sr25519::Pair>
  + ecdsa_id() -> &TanglePairSigner<ecdsa::Pair>
  + shutdown() -> Result<()>
  + keystore_uri() -> &str
  + span() -> &tracing::Span

It also implements the Future trait (poll() -> Poll<Result<()>>).

The handle implements the Future trait, allowing it to be awaited in async contexts. When dropped, it automatically starts the Blueprint Manager if it hasn't been started yet, ensuring that resources are properly initialized.
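
A hedged sketch of driving the manager through its handle, using only the methods listed above (the exact signature of run_blueprint_manager and the error types are assumptions):

let mut handle = run_blueprint_manager(manager_config, blueprint_env).await?;
handle.start()?;

// The handle implements Future, so awaiting it waits for the manager to finish;
// signal shutdown first to stop it deliberately.
handle.shutdown()?;
handle.await?;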

Conclusion

The Blueprint Manager is a central component in the Tangle Blueprint framework that bridges on-chain events to off-chain execution. It provides a flexible system for fetching, verifying, and executing blueprints from various sources, enabling dynamic deployment and management of decentralized applications. Its event-driven architecture allows it to respond to blockchain events in real-time, ensuring that blueprint services are properly synchronized with their on-chain representations. Through its configuration options and source handler system, it supports diverse deployment scenarios, from development and testing to production environments.


Architecture Overview

The Blueprint networking system consists of several interconnected components that work together to provide a secure and flexible peer-to-peer communication layer.

  • Peer-to-Peer Communication: Built on libp2p, enabling secure connections and peer verification.
  • Message Exchange: Supports both direct communication and gossip protocols.

For more details on networking extensions, refer to Advanced Topics - Networking Extensions.


Code Snippet

// Example (js-libp2p) of locating and dialing a peer
const peer = await libp2p.peerRouting.findPeer(peerId);
await libp2p.dial(peer.id);

Core Networking Components

Service Handle

Applications interact with the network through a NetworkServiceHandle, which provides a simplified interface to the underlying network service:

NetworkServiceHandle<K: KeyType> fields:

  • local_peer_id: PeerId
  • blueprint_protocol_name: Arc<String>
  • local_signing_key: K::Secret
  • sender: NetworkSender<K>
  • receiver: NetworkReceiver
  • peer_manager: Arc<PeerManager<K>>
  • local_verification_key: Option<VerificationIdentifierKey<K>>

NetworkServiceHandle methods:

  • send(routing: MessageRouting, message: Vec<u8>)
  • peers() -> Vec<PeerId>
  • next_protocol_message() -> Option<ProtocolMessage>
  • get_participant_id() -> Option<usize>
  • split() -> (NetworkSender<K>, NetworkReceiver)

The handle provides methods for:

  • Sending messages (point-to-point or gossip)
  • Querying connected peers
  • Receiving incoming messages
  • Retrieving participant IDs for consensus protocols

Configuring Network Settings

Configuration Parameters

| Parameter | Description |
|---|---|
| network_name | Name/namespace for the network |
| instance_id | Unique identifier for this blueprint instance |
| instance_key_pair | Secret key for signing protocol messages |
| local_key | libp2p keypair for peer identification |
| listen_addr | Network address to listen on |
| target_peer_count | Target number of peers to maintain |
| bootstrap_peers | Initial peers to connect to |
| enable_mdns | Enable multicast DNS discovery |
| enable_kademlia | Enable Kademlia DHT for peer discovery |
| using_evm_address_for_handshake_verification | Whether to use EVM addresses for peer verification |

Sources: 202-224 250-335

Understanding NetworkService and NetworkConfig

Network Service

The NetworkService is the core component of the networking system. It initializes and manages the libp2p swarm and coordinates all network activities.

Service Configuration

The NetworkService is configured through the NetworkConfig struct:

NetworkConfig<K: KeyType> fields:

  • network_name: String
  • instance_id: String
  • instance_key_pair: K::Secret
  • local_key: Keypair
  • listen_addr: Multiaddr
  • target_peer_count: u32
  • bootstrap_peers: Vec<Multiaddr>
  • enable_mdns: bool
  • enable_kademlia: bool
  • using_evm_address_for_handshake_verification: bool

NetworkService<K: KeyType> fields:

  • swarm: Swarm<GadgetBehaviour<K>>
  • local_signing_key: K::Secret
  • peer_manager: Arc<PeerManager<K>>

NetworkService methods:

  • new(config: NetworkConfig<K>, allowed_keys: AllowedKeys<K>, allowed_keys_rx: Receiver<AllowedKeys<K>>) -> Result<Self, Error>
  • start() -> NetworkServiceHandle<K>
  • run()

Sources: 26-244 15-223
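A sketch of wiring these together, using the fields listed above (K stands in for a concrete KeyType implementation; instance_secret, allowed_keys, and allowed_keys_rx are assumed to be created elsewhere):

use libp2p::Multiaddr;

fn start_network<K: KeyType>(
    instance_secret: K::Secret,
    allowed_keys: AllowedKeys<K>,
    allowed_keys_rx: Receiver<AllowedKeys<K>>,
) -> Result<NetworkServiceHandle<K>, Error> {
    let config = NetworkConfig {
        network_name: "my-network".into(),
        instance_id: "blueprint-1".into(),
        instance_key_pair: instance_secret,
        local_key: libp2p::identity::Keypair::generate_ed25519(),
        listen_addr: "/ip4/0.0.0.0/tcp/0".parse::<Multiaddr>().unwrap(),
        target_peer_count: 24,
        bootstrap_peers: vec![],
        enable_mdns: true,
        enable_kademlia: true,
        using_evm_address_for_handshake_verification: false,
    };

    // new() and start() as listed above.
    let service = NetworkService::new(config, allowed_keys, allowed_keys_rx)?;
    Ok(service.start())
}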

Peer Management and Verification

Peer Management

The PeerManager is responsible for tracking peer states, managing verification, and controlling which peers are allowed to connect.

Peer Verification Flow

"Node B" "Node A"
Verify signature against whitelist
Both peers are now verified
PeerManager.is_peer_verified() = true
Connection Establishment
Handshake Request (with signed challenge)
Handshake Response
Protocol Messages (now allowed)

Allowed Keys and Whitelisting

The PeerManager maintains a whitelist of allowed keys, which can be either:

  1. EVM Addresses: For Ethereum-compatible verification
  2. Instance Public Keys: For blockchain-specific key types
AllowedKeys<K: KeyType> (enum):

  • EvmAddresses(HashSet<Address>)
  • InstancePublicKeys(HashSet<K::Public>)

VerificationIdentifierKey<K: KeyType> (enum):

  • EvmAddress(Address)
  • InstancePublicKey(K::Public)
  • verify(msg: &[u8], signature: &[u8]) -> Result<bool, Error>
  • to_bytes() -> Vec<u8>

PeerManager<K: KeyType> fields:

  • peers: DashMap<PeerId, PeerInfo>
  • verified_peers: DashSet<PeerId>
  • verification_id_keys_to_peer_ids: DashMap<VerificationIdentifierKey, …>
  • banned_peers: DashMap<PeerId, …>
  • whitelisted_keys: Arc<RwLock<Vec<VerificationIdentifierKey<K>>>>

PeerManager methods:

  • clear_whitelisted_keys()
  • insert_whitelisted_keys(keys: AllowedKeys<K>)
  • is_key_whitelisted(key: &VerificationIdentifierKey<K>) -> bool
  • verify_peer(peer_id: &PeerId)
  • is_peer_verified(peer_id: &PeerId) -> bool
  • ban_peer(peer_id: PeerId, reason: String, duration: Option<Duration>)

Communication Protocols

The networking layer supports two primary communication methods:

  1. Direct Request/Response: For targeted messages to specific peers.
  2. Gossip: For broadcast messages to all peers subscribed to a topic.

Message Flow

Gossip communication: a publisher node publishes a message to the blueprint topic, and the message is delivered to every subscriber of that topic (Subscriber 1 … Subscriber N).

Direct communication: Node A sends a request to Node B and receives a response.

Networking Extensions

Message Types

The Blueprint protocol defines several message types:

| Message Type | Purpose |
|---|---|
| InstanceMessageRequest | Direct request to a peer |
| InstanceMessageResponse | Response to a direct request |
| HandshakeMessage | Used for peer verification |
| ProtocolMessage | Generic protocol message with routing info |

Extensions

Aggregated Signature Gossip Extension

The aggregated signature gossip extension (blueprint-networking-agg-sig-gossip-extension) optimizes consensus by aggregating signatures to reduce network traffic:

Dependencies: blueprint-networking, blueprint-core, blueprint-crypto.

Components: Signature Aggregator, Aggregated Gossip Handler, Signature Verification, Signature Combining, Topic Management, Message Propagation.

Round-Based Extension

The round-based extension (blueprint-networking-round-based-extension) provides support for round-based protocols, such as distributed key generation or Byzantine agreement:

Dependencies: blueprint-networking, blueprint-core, blueprint-crypto, round-based.

Components: Round-Based Network Adapter, MPC Protocol Integration, Round Message Routing, Message Delivery Guarantees, MPC Party Handling, Protocol State Management.

Testing Utilities and Best Practices

Testing Utilities

The Blueprint SDK includes comprehensive testing utilities for the networking components, allowing developers to simulate network conditions and test protocols in isolation:

TestNode<K: KeyType> fields:

  • service: Option<NetworkService<K>>
  • peer_id: PeerId
  • listen_addr: Option<Multiaddr>
  • instance_key_pair: K::Secret
  • local_key: Keypair
  • using_evm_address_for_handshake_verification: bool

TestNode methods:

  • new(network_name: &str, instance_id: &str, allowed_keys: AllowedKeys<K>, bootstrap_peers: Vec<Multiaddr>, using_evm_address_for_handshake_verification: bool) -> Self
  • start() -> Result<NetworkServiceHandle>

Utilities

  • create_whitelisted_nodes<K: KeyType>(count: usize, network_name: &str, instance_name: &str, using_evm_address_for_handshake_verification: bool) : -> Vec<TestNode<K>>
  • wait_for_peer_discovery<K: KeyType>(handles: &[NetworkServiceHandle<K>], timeout: Duration) -> Result
  • wait_for_all_handshakes<K: KeyType>(handles: &[&mut NetworkServiceHandle<K>], timeout_length: Duration)

These utilities make it easy to:

  • Create test networks with multiple nodes
  • Simulate peer discovery and handshakes
  • Test protocol behavior under various conditions
  • Verify consensus properties

Usage Example

Here's a basic example of how to set up the networking layer in a Blueprint application:

  1. Create a network configuration
  2. Initialize the NetworkService with appropriate key types
  3. Start the service to get a handle
  4. Use the handle to send and receive messages
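A rough sketch of these steps, assuming handle is the NetworkServiceHandle returned by NetworkService::start() and routing is a MessageRouting value whose construction is protocol-specific and elided here:

async fn pump_messages<K: KeyType>(mut handle: NetworkServiceHandle<K>, routing: MessageRouting) {
    // Query connected peers.
    let peers = handle.peers();
    println!("connected to {} peers", peers.len());

    // Send a message; the routing value decides point-to-point vs. gossip.
    handle.send(routing, b"hello".to_vec());

    // Drain incoming protocol messages.
    while let Some(msg) = handle.next_protocol_message() {
        // ...hand `msg` off to the protocol logic...
        let _ = msg;
    }
}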

Future Directions and Enhancements

Applications interact with the networking layer through the Job system and Router components of the Blueprint SDK.

Future Directions

The networking layer is designed to be extensible, allowing for:

  • Additional protocol integrations
  • Performance optimizations for large-scale networks
  • Enhanced security features
  • More sophisticated peer discovery mechanisms


Job Execution Lifecycle

Introduction to Job Execution Lifecycle

Overview of Job Execution

The Blueprint Runner orchestrates the job execution flow, connecting producers, the router, and consumers.

Job Execution Lifecycle

The execution of a job involves several steps:

  1. Job Call Creation: Producers create JobCall instances with a specific JobId and a body containing the necessary data for job execution.
  2. Job Routing: The BlueprintRunner passes the JobCall to the Router, which matches it to a handler by JobId.
  3. Job Execution: The handler executes the job and returns a JobResult.
  4. Result Processing: The runner processes the returned result.
  5. Result Broadcasting: The runner broadcasts the JobResult to all registered consumers.

Key Components

  • JobId: A unique identifier used by the Router to determine the appropriate handler.
  • Metadata: Additional information for handlers or middleware.
  • Extensions: Arbitrary data attached to a job call.

Job Processing and Management

Job Execution Flow in Blueprint

  • Integration with Blockchain Protocols: Blueprint allows operators to register before participating in job execution, facilitating compatibility with various blockchain protocols.
  • Asynchronous Task Processing: The job execution flow is designed for processing tasks across different blockchain environments, enabling effective job design, implementation, and debugging.

Broadcasting Job Results

4. Result Processing

After a job is executed, its return value is converted to a JobResult using the IntoJobResult trait. This standardizes how job results are represented, regardless of the job's return type.

A JobResult contains:

  • head: Parts
  • body: T

Methods: metadata() -> MetadataMap, body() -> Result<T, E>, into_parts() -> Result

5. Result Broadcasting

The BlueprintRunner broadcasts the job results to all registered consumers, completing the job execution cycle.

Using the Builder Pattern for Configuration

The builder pattern allows for flexible configuration of the runner with producers, consumers, background services, and a router.

Job Execution Components

Producer

A producer is a stream that generates job calls. It can derive job calls from blockchain events, timers, or any other source.

type Producer = Arc<Mutex<Box<dyn Stream<Item = Result<JobCall, BoxError>> + Send + Unpin + 'static>>>;


Consumer

A consumer is a sink that receives job results. It can store results, relay them to other systems, or perform actions based on them.

type Consumer = Arc<Mutex<Box<dyn Sink<JobResult, Error = BoxError> + Send + Unpin + 'static>>>;

Router

The router matches job calls to the appropriate handlers based on the job ID and can apply middleware to all jobs or specific routes.

Job Handler

A job handler is an async function that processes a job call. It can extract arguments from the call, perform business logic, and return a result.

Internally, the Job trait implementation drives this flow: JobCall → Job::call() → extract arguments → execute the job function → convert the return value to a JobResult.
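For example, a handler might extract one component from the application context along with the job payload (a sketch; AppContext and Keystore are illustrative types, and job ID 7 is arbitrary):

use blueprint_sdk::extract::Context;

async fn sign_payload(Context(keystore): Context<Keystore>, payload: String) -> String {
    // ...use `keystore` to sign `payload`...
    let _ = &keystore;
    format!("signed:{payload}")
}

// Registered on the router like any other handler:
// let router = Router::new().route(7, sign_payload);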

JobCall Structure and Processing

Job Call Structure

A JobCall consists of two main parts: the header (Parts) and the body.

JobCall
+ Parts head
+ T body
+ job_id() : JobId
+ metadata() : MetadataMap
+ extensions() : Extensions
+ into_parts() (Parts, T)

Parts
+ JobId job_id
+ MetadataMap metadata
+ Extensions extensions
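A short sketch of working with these parts (job_call is assumed to be a JobCall<T>):

let job_id = job_call.job_id();
let (parts, body) = job_call.into_parts();
assert_eq!(parts.job_id, job_id);
// parts.metadata and parts.extensions carry data for middleware;
// `body` is the job-specific payload of type T.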

Execution Flow

  1. Creates JobCall
  2. Routes JobCall
  3. Executes Job
  4. Returns JobResult
  5. Processes Result
  6. Broadcasts JobResult

Components

  • Producer
  • Blueprint Runner
  • Router
  • Job Handler
  • Consumer


Routing Job Calls

2. Routing Job Calls

The BlueprintRunner receives job calls from producers and passes them to the Router, which determines the appropriate handler based on the JobId.

Router Mechanism

  • A JobCall (with its JobId) enters the Router.
  • On a route match, it is dispatched to the registered Job Handler.
  • If no route matches, it goes to the Fallback Handler.
  • "Always" routes are invoked for every call, regardless of matching.

3. Job Execution

When a job is executed, the framework first extracts arguments from the job call using the FromJobCall and FromJobCallParts traits. These extractors can access parts of the job call or the entire call.

JobCall → split into Parts + body → extract arguments via FromJobCall/FromJobCallParts → execute the job function → convert the return value via IntoJobResult → JobResult

Error Handling in Job Execution

Error Handling

Error handling is an integral part of the job execution flow. If a job returns an error, it is wrapped in a JobResult::Err and propagated to consumers.

A successful job return value becomes JobResult::Ok; an error becomes JobResult::Err.

Background Services

In addition to job execution, the BlueprintRunner can also manage background services that run alongside job processing.

Each background service is launched with start(), which yields a oneshot::Receiver for monitoring. If a service fails, the runner triggers shutdown; otherwise it continues running.

Lifecycle Management of BlueprintRunner

Runner Implementation Detail

The FinalizedBlueprintRunner is the core component that manages the job execution flow. It is created and configured through the BlueprintRunnerBuilder.

BlueprintRunnerBuilder fields:

  • config: Box<DynBlueprintConfig>
  • env: BlueprintEnvironment
  • producers: Vec<Producer>
  • consumers: Vec<Consumer>
  • router: Option<Router>
  • background_services: Vec<Box<DynBackgroundService>>
  • shutdown_handler: Future

BlueprintRunnerBuilder methods:

  • router(Router) -> Self
  • producer(Stream) -> Self
  • consumer(Sink) -> Self
  • background_service(BackgroundService) -> Self
  • run() -> Result

FinalizedBlueprintRunner fields:

  • config: Box<DynBlueprintConfig>
  • producers: Vec<Producer>
  • consumers: Vec<Consumer>
  • router: Router
  • env: BlueprintEnvironment
  • background_services: Vec<Box<DynBackgroundService>>
  • shutdown_handler: Future

FinalizedBlueprintRunner methods:

  • run() -> Result

Sources: 101-110 629-653

Background services are started when the runner is launched and are monitored throughout the runner's lifecycle. If a background service fails, the runner will trigger a shutdown.

Protocol-Specific Registration

Before job execution begins, the BlueprintRunner checks if protocol-specific registration is required and performs it if necessary.

The runner starts and checks requires_registration(). If registration is required, it registers with the protocol; otherwise registration is skipped. When should_exit_after_registration() == true, the runner exits; when false, it continues to job execution.


Configuring the Blueprint Runner

Runner Configuration

Overview

The Blueprint Runner orchestrates job execution, lifecycle management, and protocol-specific operations.

Required Components

  1. Router: Maps job IDs to handler functions. Required for operation.
  2. Producers: At least one producer is needed to supply job calls.

Optional Components

  1. Consumers: Handle outputs from processed jobs.
  2. Background Services: Perform auxiliary tasks.
  3. Shutdown Handler: Custom logic for shutdown.

Example Code for Configuring a Runner

let result = BlueprintRunner::builder(config, blueprint_env)
    // Required: Add a Router mapping job IDs to handlers
    .route(0, async || "Hello, world!")
    .route(1, handle_complex_job)
    // Required: Add at least one producer
    // Optional: Add result consumers
    // Optional: Add background services
    // Optional: Specify shutdown logic
    .with_shutdown_handler(async { println!("Shutting down!") })
    .run()
    .await;

Relevant Source Files

BlueprintEnvironment Configuration

The BlueprintEnvironment struct contains fundamental configuration settings needed by all blueprints, including protocol-specific information, network endpoints, and key management.

Struct Definition

struct BlueprintEnvironment {
    http_rpc_endpoint: String,
    ws_rpc_endpoint: String,
    keystore_uri: String,
    data_dir: Option<PathBuf>,
    protocol_settings: ProtocolSettings,
    test_mode: bool,
}

enum ProtocolSettings {
    None,
    Tangle(TangleProtocolSettings),
    Eigenlayer(EigenlayerProtocolSettings),
}

Core Environment Settings

| Setting | Description | Default |
|---|---|---|
| http_rpc_endpoint | HTTP RPC endpoint for the blockchain | Required |
| ws_rpc_endpoint | WebSocket RPC endpoint for the blockchain | Required |
| keystore_uri | Path to the keystore directory | Required |
| data_dir | Directory for blueprint data | ./data (optional) |
| test_mode | Whether the blueprint is running in test mode | false |
| protocol_settings | Protocol-specific configuration | Depends on protocol |

Loading the Environment

// Loading from environment variables
let env = BlueprintEnvironment::load()?;

Tangle Protocol Settings

// Constructed manually
let env = BlueprintEnvironment {
    http_rpc_endpoint: "https://rpc.tangle.tools".to_string(),
    ws_rpc_endpoint: "wss://rpc.tangle.tools".to_string(),
    keystore_uri: "./keystore".to_string(),
    data_dir: Some(PathBuf::from("./data")),
    protocol_settings: ProtocolSettings::Tangle(TangleProtocolSettings {
        blueprint_id: 1,
        service_id: Some(2),
    }),
    test_mode: false,
};

Required settings for the Tangle protocol:

TangleProtocolSettings:

  • blueprint_id: u64
  • service_id: Option<u64>
  • protocol()

TangleConfig (uses TangleProtocolSettings):

  • price_targets: PriceTargets
  • exit_after_register: bool
  • new(price_targets)
  • with_exit_after_register(bool)

| Setting | Description | Required |
|---|---|---|
| blueprint_id | The ID of the blueprint on Tangle Network | Yes |
| service_id | The service ID for this blueprint instance | No (for registration) |

Eigenlayer Protocol Configuration

Configuration Types

  1. ECDSA-based configuration - Using EigenlayerECDSAConfig

    • Requires:
      • earnings_receiver_address
      • delegation_approver_address
    • Constructor:
      new(earnings_receiver, delegation_approver)
  2. BLS-based configuration - Using EigenlayerBLSConfig

    • Requires:
      • earnings_receiver_address
      • delegation_approver_address
      • exit_after_register
    • Constructor:
      new(earnings_receiver, delegation_approver)
      with_exit_after_register(bool)
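A minimal construction sketch (illustrative zero addresses; the constructors follow the shapes above, with argument types assumed to be Alloy addresses):

use alloy_primitives::Address;

let earnings_receiver = Address::ZERO;
let delegation_approver = Address::ZERO;

let ecdsa_config = EigenlayerECDSAConfig::new(earnings_receiver, delegation_approver);

let bls_config = EigenlayerBLSConfig::new(earnings_receiver, delegation_approver)
    .with_exit_after_register(false);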

Key Registration Settings

| Setting | Description | Default |
|---|---|---|
| price_targets | Resource pricing information | All zeros |
| exit_after_register | Whether to exit after registration | true |

Eigenlayer Registration Process

  • ECDSA Registration
    1. Check if the operator is registered with the stake registry.
    2. Register the operator with earnings receiver and delegation approver.
    3. Exit based on configuration.

Settings File Format

Example for Eigenlayer Protocol

ALLOCATION_MANAGER_ADDRESS=0x8a791620dd6260079bf849dc5567adc3f2fdc318
REGISTRY_COORDINATOR_ADDRESS=0xcd8a1c3ba11cf5ecfa6267617243239504a98d90
OPERATOR_STATE_RETRIEVER_ADDRESS=0xb0d4afd8879ed9f52b28595d31b441d079b2ca07
DELEGATION_MANAGER_ADDRESS=0xcf7ed3acca5a467e9e704c703e8d87f634fb0fc9
SERVICE_MANAGER_ADDRESS=0x36c02da8a0983159322a80ffe9f24b1acff8b570
STAKE_REGISTRY_ADDRESS=0x4c5859f0f772848b2d91f1d83e2fe57935348029
STRATEGY_MANAGER_ADDRESS=0xa513e6e4b8f2a923d98304ec87f64353c4d5c853
AVS_DIRECTORY_ADDRESS=0x5fc8d32690cc91d4c39d9d3abcbd16989f875707
REWARDS_COORDINATOR_ADDRESS=0xb7f8bc63bbcad18155201308c8f3540b07f84f5e
PERMISSION_CONTROLLER_ADDRESS=0x3aa5ebb10dc797cac828524e59a333d0a371443c
STRATEGY_ADDRESS=0x524f04724632eed237cba3c37272e018b3a7967e

Blueprint Registration Process

Registration Process

Blueprints often need to register with their respective protocol networks before they can operate. The registration process is controlled by the BlueprintConfig trait implementation:

1. The runner starts and calls requires_registration().
2. If registration is required, register() runs; otherwise registration is skipped.
3. should_exit_after_registration() then determines whether the runner exits or continues execution.

Tangle Registration

  1. Checks if the operator is already registered for the specified blueprint ID
  2. Registers the operator with the Tangle Network if needed
  3. Can optionally exit after registration

BLS Registration

  1. Checks if the operator is registered
  2. Registers the operator
  3. Deposits into the strategy
  4. Sets the allocation delay
  5. Stakes tokens to quorums
  6. Registers to operator sets
  7. Exits based on the exit_after_register setting (defaults to true)

CLI Configuration

The Blueprint CLI provides commands to automate the runner configuration process, particularly the run command:

cargo tangle blueprint run --protocol <protocol> --rpc-url <url> [OPTIONS]

CLI Reference

Key CLI Options

| Option | Description | Default |
|---|---|---|
| --protocol, -p | Protocol to use (tangle or eigenlayer) | Required |
| --rpc-url, -u | HTTP RPC endpoint URL | http://127.0.0.1:9944 |
| --keystore-path, -k | Path to the keystore | ./keystore |
| --data-dir, -d | Data directory path | ./data |
| --settings-file, -f | Path to protocol settings file | ./settings.env |
| --network, -w | Network to connect to (local, testnet, mainnet) | local |
| --bootnodes, -n | Optional bootnodes to connect to | None |

The CLI will prompt for required information if not provided and load protocol-specific settings from environment variables or a .env file.

Sources: 159-198, 454-576

Sources: [737-821](https://github.com/tangle-network/blueprint/blob/af8278cb/cli/src/main.rs#L737-L821), [80-96](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/runner/src/eigenlayer/config.rs#L80-L96)

Error Handling in Runner Configurations

Error Handling

Runner configuration errors are categorized into several types:

| Error Type | Description |
|---|---|
| ConfigError | Issues with configuration values |
| KeystoreError | Problems with the keystore |
| NetworkingError | Network connection issues |
| NoRouterError | Missing router configuration |
| NoProducersError | No job producers configured |
| Protocol-specific errors | Errors specific to Tangle or Eigenlayer |

Proper error handling in your configuration is essential for diagnosing issues:

match runner.run().await {
    Ok(_) => println!("Runner completed successfully"),
    Err(err) => match err {
        RunnerError::Config(config_err) => {
            eprintln!("Configuration error: {}", config_err);
        }
        RunnerError::NoRouter => {
            eprintln!("Runner missing router configuration");
        }
        // Handle other error types
        _ => eprintln!("Runner error: {}", err),
    }
}

Core Components

Key Components

  1. BlueprintRunner: The main facade that provides a builder pattern for constructing a runner.
  2. BlueprintRunnerBuilder: A builder that allows configuration of producers, consumers, router, and background services.
  3. FinalizedBlueprintRunner: The internal runner implementation that orchestrates the execution flow.
  4. BlueprintConfig: A trait for protocol-specific configuration and registration.
  5. BackgroundService: A trait for services that run in the background during blueprint execution.

Sources: 39-76 101-110 122-129 448-467

Blueprint Runner

The Blueprint Runner is a core component of the Tangle Blueprint framework responsible for orchestrating job execution, managing background services, and handling protocol-specific operations across various blockchain environments.

Core Architecture

The Blueprint Runner follows a producer-consumer architecture pattern with a central router for job dispatching and execution. It provides a flexible system configurable for different blockchain protocols.

Key Components

  • BlueprintRunnerBuilder
  • FinalizedBlueprintRunner
  • Router
  • BlueprintConfig
  • Producers
  • Consumers
  • Background Services
  • JobCall
  • JobResult

Protocol Support

  • Tangle Protocol
  • Eigenlayer Protocol
  • Other Protocols


Additional Resources

For more information about the overall Blueprint architecture, see Architecture Overview. For details on creating blueprints, see Creating Your First Blueprint.

Configuration and Setup

Configuration and Setup for Blueprint Runner

To configure and set up the Blueprint Runner, use the following methods:

Create the builder with a config and environment, then chain: .router(router), .producer(producer), .consumer(consumer), .background_service(service), .with_shutdown_handler(handler), and finally .run().

Builder Pattern

Utilize the BlueprintRunner::builder() to create a new instance:

BlueprintRunnerBuilder

Key Methods

  • Add Router: .router(router)
  • Add Producer: .producer(producer)
  • Add Consumer: .consumer(consumer)
  • Add Background Service: .background_service(service)
  • Set Shutdown Handler: .with_shutdown_handler(handler)
  • Run Blueprint: .run()

Finally, call .run() to execute the Blueprint.

Job Execution Mechanism

Job Routing and Execution

The core functionality of the Blueprint Runner revolves around executing jobs:

  1. Producers generate JobCall instances, each with a unique job ID.
  2. The Router routes these calls to the appropriate handler functions.
  3. Handlers process the jobs and return JobResult instances.
  4. Consumers process the job results.


Error Handling Strategies

Error Handling Steps

When an error occurs, the runner:

  1. Runs the shutdown handler
  2. Terminates all background services
  3. Stops all producers and consumers
  4. Returns the error to the caller

Conclusion

The Blueprint Runner provides:

  1. A unified execution environment for different blockchain protocols
  2. A flexible producer-consumer architecture for job processing
  3. Protocol-specific configuration and registration capabilities
  4. Robust error handling and recovery

It serves as the orchestration layer for blockchain applications.

Usage Examples and Implementation Details

Implementation Details

Registration Process

The Blueprint Runner handles protocol-specific registration processes through the BlueprintConfig trait:

When run() is called:

1. requires_registration() is checked; if it returns true, register() is invoked.
2. should_exit_after_registration() is then checked; if true, the runner returns Ok(()), otherwise execution continues.
3. Background services are started and the job execution loop is created.
4. The loop runs until an error occurs or shutdown is signaled, after which the shutdown handler executes.
The runner orchestrates this flow while also managing background services.

Usage Example

Here's a basic example of how to configure and use the Blueprint Runner:

// Create a configuration (protocol-specific)
let config = TangleConfig::new(price_targets).with_exit_after_register(false);

// Create environment
let blueprint_env = BlueprintEnvironment::load()?;

// Create a router
let router = Router::new()
    .route(0, async || "Hello, world!") // Route job ID 0 to a simple handler
    .route(1, async |input: String| input.to_uppercase()); // Route job ID 1 to a handler that accepts a String

// Create and configure the runner
let result = BlueprintRunner::builder(config, blueprint_env)
    .router(router)
    .producer(some_producer)
    .consumer(some_consumer)
    .background_service(some_background_service)
    .with_shutdown_handler(async { println!("Shutting down!") })
    .run()
    .await;

// Handle any errors from the runner
if let Err(e) = result {
    eprintln!("Runner failed: {:?}", e);
}

Job Execution Flow

The Blueprint Runner orchestrates the flow of jobs through the system using a producer-consumer pattern with a central router.

"Background Services" "Job Consumer" "Job Handler" "Router" "Blueprint Runner" "Job Producer" "Background Services" "Job Consumer" "Job Handler" "Router" "Blueprint Runner" "Job Producer"
Initialization Phase
Job Execution Phase
Termination Phase
alt[Error or Shutdown Signal]
start()
poll for jobs
JobCall
route(JobCall)
handle(JobCall)
JobResult
send(JobResult)
execute shutdown_handler
shutdown
terminate

Sources: 329-349 480-486

Producer-Consumer Model

Producers generate JobCall instances, which are routed through the router to appropriate handlers, resulting in JobResult instances that are then processed by consumers.

  • Producer: A component that generates job calls (implements Stream<Item = Result<JobCall, E>>)
  • Consumer: A component that processes job results (implements Sink<JobResult>)
  • Router: A component that routes job calls to appropriate handlers based on job IDs
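For instance, a minimal logging consumer can be built from futures::sink::unfold (a sketch assuming JobResult: Debug and the runner's BoxError alias):

use futures::sink;

let logging_consumer = sink::unfold((), |(), result: JobResult| async move {
    println!("job result: {result:?}");
    Ok::<(), BoxError>(())
});
// Passed to the builder via .consumer(logging_consumer).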

Runner Configuration

The Blueprint Runner is built using a fluent builder pattern that allows for flexible configuration.

Builder Pattern

See the usage example in the section above for a complete builder chain.

Configuration Management

Environment Configuration

The runner requires a BlueprintEnvironment that defines the context in which it operates:

BlueprintEnvironment:

  • http_rpc_endpoint: String
  • ws_rpc_endpoint: String
  • keystore_uri: String
  • data_dir: Option<PathBuf>
  • protocol_settings: ProtocolSettings
  • test_mode: bool
  • load() -> Result<BlueprintEnvironment, ConfigError>
  • keystore() -> Keystore

ProtocolSettings (influences runner behavior):

  • None
  • Symbiotic
  • Tangle(TangleProtocolSettings)
  • Eigenlayer(EigenlayerProtocolSettings)
  • protocol() -> &str

Protocol:

  • Tangle
  • Eigenlayer
  • Symbiotic
  • from_env() -> Result<Option<Protocol>>
  • as_str() -> &str

TangleProtocolSettings and EigenlayerProtocolSettings hold the protocol-specific fields.
Sources: 210-241 36-52 105-114

Error Handling

The Blueprint Runner provides comprehensive error handling through a hierarchy of error types:

RunnerError
+ NoRouter
+ NoProducers
+ Keystore(KeystoreError)
+ Networking(NetworkingError)
+ Io(std::io::Error)
+ Config(ConfigError)
+ BackgroundService(String)
+ JobCall(JobCallError)
+ Producer(ProducerError)
+ Consumer(BoxError)
+ Tangle(TangleError)
+ Eigenlayer(EigenlayerError)
+ Other(Box<dyn Error>)

ConfigError
+ MissingTangleRpcEndpoint
+ MissingKeystoreUri
+ MissingBlueprintId
+ MissingServiceId
+ MalformedBlueprintId
+ MalformedServiceId
+ UnsupportedKeystoreUri
+ UnsupportedProtocol
+ UnexpectedProtocol
+ NoSr25519Keypair
+ InvalidSr25519Keypair
+ NoEcdsaKeypair
+ InvalidEcdsaKeypair
+ TestSetup
+ MissingEigenlayerContractAddresses
+ MissingSymbioticContractAddresses
+ Other(Box<dyn Error>)

JobCallError
+ JobFailed(Box<dyn Error>)
+ JobDidntFinish(JoinError)

ProducerError
+ StreamEnded
+ Failed(Box<dyn Error>)

Protocol Integration

Protocol-Specific Configuration

The runner supports different blockchain protocols through the BlueprintConfig trait.

Tangle Protocol

For Tangle Network, the runner handles:

  1. Operator registration with the network
  2. Service instantiation
  3. Job execution for Tangle-specific operations

Key components include:

  • TangleConfig: Configuration for Tangle protocol
  • TangleProtocolSettings: Protocol-specific settings
  • register_impl: Implementation of operator registration

Code Snippets

// Example of operator registration implementation
fn register_impl() {
    // Registration logic here
}

Sources:

Eigenlayer Protocol

Eigenlayer Protocol Registration Process

Eigenlayer support includes both BLS and ECDSA cryptography options for operator registration:

  • ECDSA registration hooks: requires_registration_ecdsa_impl() -> bool and register_ecdsa_impl()
  • BLS registration hooks: requires_registration_bls_impl() -> bool and register_bls_impl()

Eigenlayer Protocol Integration

For Eigenlayer, the runner supports both ECDSA and BLS cryptography:

  1. ECDSA configuration through EigenlayerECDSAConfig
  2. BLS configuration through EigenlayerBLSConfig
  3. Contract interaction through Alloy and EigenSDK libraries

Error Handling and Recovery

The Blueprint Runner implements a robust error handling system that distinguishes between different types of errors:

  1. Configuration errors: Issues with the environment or protocol settings
  2. Execution errors: Problems during job execution
  3. Protocol-specific errors: Issues related to specific blockchain protocols


Service Spawning and Source Fetching

The system supports multiple fetcher types:

  1. GitHub Fetcher: Downloads binaries from GitHub releases.
  2. Container Source: Pulls and runs Docker/Podman containers.
  3. Testing Fetcher: Builds binaries from local source code (primarily for testing).

Each source implementation provides methods to:

  • Fetch the blueprint (downloading, building, or pulling).
  • Spawn processes or containers.
  • Monitor execution status.
  • Handle process termination.

Key Functions

  • fetch(): Initiates the fetching process.
  • get_fetcher_candidates(): Checks source type and retrieves appropriate fetchers.
  • spawn(): Runs the process/container and returns a ProcessHandle.
  • Active Gadgets: Manages the active processes and their statuses.

Fetching Process

  1. Fetch Blueprint:

    • Download binary
    • Pull image
    • Build from source
    • Make executable
    • Ready to spawn
  2. Run Process/Container:

    • Utilize spawn() to execute the process/container.

Event Handling Mechanism

Event Flow Overview

The Blueprint Manager subscribes to network events and processes them to maintain the correct state of running blueprints and services. This dynamic execution is based on on-chain activity.

ProcessHandle Structure

The ProcessHandle structure provides a consistent interface for monitoring and controlling processes:

| Method | Description |
|---|---|
| status() | Returns the current status of the process |
| wait_for_status_change() | Asynchronously waits for the process status to change |
| abort() | Sends a termination signal to the process |
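A supervision sketch using these methods (handle is the ProcessHandle returned by spawn(); Status is the Running/Finished/Error enum described in the sources section):

async fn supervise(mut handle: ProcessHandle) {
    while let Some(status) = handle.wait_for_status_change().await {
        match status {
            Status::Running => continue,
            Status::Finished => return, // clean exit
            Status::Error => break,     // fall through to abort
        }
    }
    // Request termination (e.g. on error or manager shutdown).
    let _ = handle.abort();
}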

Event Handling Initialization

When the Blueprint Manager starts, it performs an initialization sequence to establish its event handling loop:

When run_blueprint_manager() starts, it initializes the keystore, creates source candidates, initializes the Tangle client, and creates the services client. It then requests the initial event, queries the operator's blueprints, and runs check_blueprint_events() and handle_tangle_event() on the initial state. Finally, it begins the event loop: for each next_event(), it calls check_blueprint_events() followed by handle_tangle_event().

This process involves:

  1. Setting up the keystore for cryptographic operations
  2. Detecting available source candidates (container, native, etc.)
  3. Creating clients for interacting with the Tangle Network
  4. Processing the initial state from the chain
  5. Starting the event loop for continuous monitoring

Blueprint Management

Blueprint Verification and Service Management

When events indicate a blueprint needs attention, the manager verifies the blueprint and manages its services. This involves fetching the blueprint from appropriate sources and spawning instances of services.

Tangle events feed an event polling loop, which produces an EventPollResult (e.g. needs_update=true plus any blueprint_registrations). The event processor passes this to the blueprint verifier, which yields a VerifiedBlueprint; the service starter then fetches from the blueprint sources (GitHub, Container, or Testing) and spawns the running gadgets that make up the active services.

VerifiedBlueprint Structure

The VerifiedBlueprint structure manages the lifecycle of a blueprint from verification to service execution. It contains:

  • Fetchers for different source types (GitHub, Container, Testing)
  • The blueprint definition with metadata and service IDs
  • Methods to start services and manage their lifecycle

Event Handling Logic

For each block of events, the manager:

  1. Processes the event and returns an EventPollResult, passing the event and result onward.
  2. If needs_update == true, checks the available sources and queries the updated blueprints.
  3. For each blueprint, creates a VerifiedBlueprint and prepares its fetcher candidates.
  4. For each service, calls start_services_if_needed(); for each fetcher, calls fetch() and, if the fetch succeeds, spawn(), adding the process to the active gadgets.
  5. Cleans up stale services.

Blueprint Source Selection and Fetching

When a blueprint needs to be instantiated, the Blueprint Manager selects appropriate sources based on the blueprint definition and system capabilities.


Event Types and Processing

The Blueprint Manager processes several event types from the Tangle Network, each triggering specific behaviors. The event processing is handled by the check_blueprint_events function, which examines the events in each block and determines the appropriate actions:

  1. PreRegistration: Adds blueprint IDs to the registration queue when the operator is selected.
  2. Registered: Signals that a blueprint has been registered and services need to be updated.
  3. Unregistered: Triggers cleanup of services associated with the unregistered blueprint.
  4. ServiceInitiated: Indicates that a new service has been started for a blueprint.
  5. JobCalled: Logs when a job is called (primarily for informational purposes).
  6. JobResultSubmitted: Logs when a job result is submitted (primarily for informational purposes).

Service Lifecycle Management

The Blueprint Manager tracks all running services and manages their lifecycle based on on-chain events.

Key States

  • Operator selected
  • Blueprint registered
  • Event processing
  • Fetch error
  • Fetch successful
  • Service started
  • Blueprint unregistered
  • Service crashed
  • Registration mode
  • Auto-restart
  • PreRegistration
  • Registered
  • Fetching
  • Failed
  • ServiceStarting
  • Running
  • Unregistered
  • Error
  • Finished

Key Aspects

  1. Status Monitoring: Services report their status (Running, Finished, Error).
  2. Auto-Restart: Failed services are restarted on the next event cycle.
  3. Cleanup: Services for unregistered blueprints are terminated and removed.
  4. Registration Mode: Special execution mode where services run once and exit.

Configuration and Environment Setup

Environment and Configuration

The Blueprint Manager uses configuration from multiple sources to set up event handling:

  1. BlueprintManagerConfig: Core configuration for the manager
  2. BlueprintEnvironment: Environment configuration for blueprints
  3. SourceCandidates: Available sources for fetching blueprints

Environment Variables

When services are spawned, they receive environment variables and command-line arguments that allow them to connect to the appropriate resources:

| Environment Variable | Purpose |
|---|---|
| HTTP_RPC_URL | HTTP RPC endpoint for the Tangle Network |
| WS_RPC_URL | WebSocket RPC endpoint for the Tangle Network |
| KEYSTORE_URI | Path to the keystore for cryptographic operations |
| BLUEPRINT_ID | ID of the blueprint being executed |
| SERVICE_ID | ID of the specific service within the blueprint |
| PROTOCOL | Protocol the service should use (Tangle, EVM, etc.) |
| DATA_DIR | Directory for service data storage |
| REGISTRATION_MODE_ON | Flag indicating if the service is in registration mode |


Resilience Features

Error Handling and Resilience

The event handling system is designed to be resilient to failures:

  1. Source Selection Fallbacks: If a preferred source fails, the system tries alternative sources.
  2. Service Recovery: Failed services are detected and can be restarted.
  3. Error Logging: Comprehensive error logging provides visibility into issues.
  4. Graceful Shutdown: Services are properly terminated when the manager shuts down.


Core Components of Blueprint

BlueprintSourceHandler Trait

  • Methods:
    • +fetch() : Result<...>: Retrieves the blueprint code or binary from its source.
    • +spawn(source_candidates, env, service, args, env_vars) : Result<ProcessHandle>: Creates a running process for the blueprint.
    • +blueprint_id() : u64: Returns the unique ID of the blueprint.
    • +name() : String: Returns a human-readable name for the source.

ProcessHandle

  • Fields:
    • -status: UnboundedReceiver<Status>
    • -cached_status: Status
    • -abort_handle: oneshot::Sender<...>
  • Methods:
    • +new(status, abort_handle) : ProcessHandle
    • +status() : Status
    • +wait_for_status_change() : Option<Status>
    • +abort() : bool

Status Enumeration

  • Running
  • Finished
  • Error

Sources: 11-66
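A sketch of a custom source implementing this trait (spawn() parameters are elided, and Error::BinaryNotFound is a hypothetical variant; the real trait uses the signatures listed above):

use std::path::PathBuf;

struct LocalDirSource {
    blueprint_id: u64,
    binary_path: PathBuf,
}

impl BlueprintSourceHandler for LocalDirSource {
    async fn fetch(&mut self) -> Result<(), Error> {
        // Nothing to download: just verify the binary exists.
        if !self.binary_path.exists() {
            return Err(Error::BinaryNotFound); // hypothetical variant
        }
        Ok(())
    }

    async fn spawn(&mut self /* , source_candidates, env, service, args, env_vars */) -> Result<ProcessHandle, Error> {
        // Launch the binary and wire its status stream into a ProcessHandle.
        todo!("launch self.binary_path and return a ProcessHandle")
    }

    fn blueprint_id(&self) -> u64 {
        self.blueprint_id
    }

    fn name(&self) -> String {
        "local-dir".to_string()
    }
}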

Understanding Blueprint Sources

Blueprint Sources


Overview of Blueprint Sources

Blueprint Sources are responsible for:

  1. Fetching blueprint code/binaries from various locations
  2. Preparing the execution environment
  3. Spawning and monitoring blueprint processes
  4. Managing process lifecycle and cleanup

For more details on the Blueprint Manager that orchestrates these sources, refer to the Blueprint Manager.

Types of Blueprint Sources

Source Implementation Interface

All blueprint sources implement the BlueprintSourceHandler trait, which defines the standard interface for source operations.

GitHub Sources

GitHub sources fetch pre-built binary executables from GitHub releases. They handle platform-specific binary selection, download, verification, and execution.

1. fetch()
2. get_binary()
3. Check if exists
   Yes -> Verify hash
   No -> Download from GitHub
4. Verify hash
5. make_executable()
6. spawn()

Key components:

  • GithubBinaryFetcher: Handles GitHub-specific download and execution logic
  • Binary selection based on platform (OS and architecture)
  • Hash verification to ensure binary integrity
  • Execution with appropriate permissions

Container Sources

Container sources pull and run Docker/Podman container images, providing an isolated execution environment.

Sources: [52-66](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/manager/src/sources/mod.rs#L52-L66) 
[30-131](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/manager/src/executor/event_handler.rs#L30-L131)
Sources: [33-126](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/manager/src/sources/github.rs#L33-L126)

Process Management with ProcessHandle

The ProcessHandle provides a standardized way to interact with running blueprint processes, supporting status monitoring and graceful termination.

Source Types

The Blueprint framework supports three primary source types:

| Source Type | Description | Use Case | Configuration |
|---|---|---|---|
| GitHub | Fetches pre-built binaries from GitHub releases | Production deployments | GithubFetcher struct |
| Container | Pulls and runs Docker/Podman container images | Isolated deployments | ImageRegistryFetcher struct |
| Test | Builds executables from local source code | Development and testing | TestFetcher struct |

Sources: 15-21 19-35 12-29

Managing Docker Containers

  1. Fetching Docker Images

    • fetch() pulls the image (docker pull) via the DockerBuilder.
    • Prepare the environment: adjust URLs for the container network and mount the keystore volume.
    • Start the container with spawn(), monitor container status via the Docker API, and perform graceful shutdown and cleanup.

    Key considerations:

    • Network mapping for services (host.containers.internal)
    • Keystore volume mounting for secure access
    • Status monitoring through the Docker API

  2. Test Sources

    • Building executables: fetch() parses the package info, runs cargo build, and makes the binary executable; spawn() then runs it.
    • TestSourceFetcher: determines the repository root, parses cargo package info, builds the executable from source, sets executable permissions, and executes the binary with arguments.

Local Source Execution with TestSourceFetcher

Key Components

  • TestSourceFetcher: Builds and executes from local source.
  • Cargo integration: Supports Rust blueprint compilation.
  • Development-focused: Includes support for hot reloading.

Source Selection Process

When a blueprint needs to be executed, the Blueprint Manager follows this source selection process:

Trigger
Query
Contains
Try best match
Yes
No
Monitor
Tangle Network Event
Blueprint Manager
On-chain Blueprint Data
List of Available Sources
Source Selection
Available Source Handlers
User preferred source type
Source available?
Fetch and Spawn Blueprint
Try next source
Handle Process Status

Source Selection Logic for On-Chain Blueprints

Source Selection Logic

  1. Gathers all source candidates from on-chain blueprint definition.
  2. Filters based on available source handlers on the system.
  3. Prioritizes based on user preference (if specified).
  4. Falls back to next available source if primary fails.
  5. Handles test mode separately with specific requirements.

Source Configuration

Sources can be configured through various mechanisms:

SourceCandidates

The SourceCandidates struct determines which source technologies are available on the system:

pub struct SourceCandidates {
    pub container: Option<Url>,      // Docker/Podman socket URL
    pub wasm_runtime: Option<String>, // WASM runtime path if available
    pub preferred_source: SourceType, // User's preferred source type
}

CLI Configuration Options

CLI Configuration

The CLI provides options for configuring source preferences:

--preferred-source, -p <SOURCE_TYPE>
    The preferred source type to use (container, native, wasm) [default: native]

--podman-host, -p <URL>
    The location of the Podman/Docker socket [default: unix:///var/run/docker.sock]

Process Lifecycle Management

Each source implementation is responsible for spawning and managing the lifecycle of its processes:

ProcessHandle "Blueprint Process" "Source Handler" "Blueprint Manager"
alt[Status == Error]
loop[Monitor]
alt[Shutdown]
fetch() Result<()> spawn(args, env_vars) Create Process ProcessHandle
status() Status abort() Kill abort() Kill

Standardized Environment Variables

Environment Variables and Arguments

All blueprint sources receive a standardized set of environment variables and command-line arguments:

| Variable | Description | Example |
|---|---|---|
| HTTP_RPC_URL | HTTP RPC endpoint URL | http://127.0.0.1:9944 |
| WS_RPC_URL | WebSocket RPC endpoint URL | ws://127.0.0.1:9944 |
| KEYSTORE_URI | Path to the keystore | ./keystore |
| BLUEPRINT_ID | The blueprint ID | 42 |
| SERVICE_ID | The service ID | 1 |
| PROTOCOL | The protocol (Tangle, Eigenlayer) | tangle |
| CHAIN | The chain to connect to | testnet |
| DATA_DIR | Directory for blueprint data | ./data/blueprint-42-service |

Integration with Blueprint Manager

The Blueprint Sources are tightly integrated with the Blueprint Manager:

Within the Blueprint Manager, the event handler triggers fetches, the blueprint verifier creates the sources, and the service manager selects among the available source handlers (GitHub, Container, or Test). Each handler creates its process, and a process monitor reports status updates back to the service manager, which starts and stops services accordingly.

Sources: 250-393

The Manager initiates source operations based on:

  • Chain events indicating new/updated blueprints
  • Service requests and lifecycle events
  • Registration requirements for services

Conclusion and Future Enhancements

Conclusion

Blueprint Sources provide a flexible mechanism for fetching, building, and running blueprints from various locations. The modular design allows for different deployment strategies while maintaining consistent environment variables and process management. The source system is designed to be extensible, allowing for new source types to be added in the future as needed.


Features

The context-derive crate provides macros with the following features:

  • Standard context derivation (std feature)
  • Tangle-specific context derivation (tangle feature)
  • EVM-specific context derivation (evm feature)
  • Networking-specific context derivation (networking feature)

These macros complement the main FromRef derive macro by providing more specialized context trait implementations.

Usage in Different Protocols

  • Standard Context Derives
  • Protocol-Specific Derives
  • Tangle Context Derives
  • EVM Context Derives
  • Networking Context Derives
  • Tangle Jobs
  • EVM Jobs
  • Networking Jobs

Sources: 1-53

Macro System Overview

Overview of Macros

The Blueprint framework provides several custom macros:

  1. debug_job: Improves error messages for job functions.
  2. FromRef: Derive macro for implementing context extraction.
  3. load_abi: Loads Ethereum contract ABIs (available with the evm feature).

Benefits

  • Better error messages
  • Automatic context extraction
  • ABI integration

Example Usage

#[debug_job]         // attribute macro: better error messages for jobs
#[derive(FromRef)]   // derive macro: context extraction
load_abi!(...)       // function-like macro: load contract ABIs (evm feature)

Integration with Blueprint SDK

The macro system enhances the runtime experience with compile-time tools.

Additional Context-Derive Macros

Includes a separate blueprint-context-derive crate with additional procedural macros for deriving Context Extension traits.

Using Macros

Checks Performed

When processing a job function, the macro checks the job constraints: Is it an async function? Is the context type valid? Is the return type compatible? Valid jobs pass through unchanged; invalid jobs generate a specific error message.

FromRef Derive Macro

The FromRef derive macro automatically implements the FromRef trait for each field in a struct, useful for extracting specific components from a larger context object.

Sources: [31-170](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/macros/src/lib.rs#L31-L170), [debug_job.rs](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/macros/src/debug_job.rs)

Debugging with debug_job

Job Debugging Macro

The debug_job attribute macro significantly improves error messages when working with job functions, making it easier to understand why a function doesn't correctly implement the Job trait.

Purpose

In the Blueprint framework, job functions need to satisfy certain constraints:

  • They must be async functions
  • They need appropriate parameter types
  • They must have compatible return types

Usage

Apply the debug_job attribute to any function meant to be used as a job:

use blueprint_sdk::macros::debug_job;

#[debug_job]
async fn my_job() -> &'static str {
    "Hello, world"
}

Context Type Specification

The debug_job macro can automatically infer the context type from a Context parameter:

use blueprint_sdk::extract::Context;

#[debug_job]
async fn job(Context(context): Context<AppContext>) {
    // The macro infers AppContext as the context type
}

You can also explicitly specify the context type:

#[debug_job(context = AppContext)]
async fn job(Context(app_ctx): Context<AppContext>, Context(inner_ctx): Context<InnerContext>) {
    // ...
}

Limitations

  • The macro doesn't work for functions in an impl block that don't have a self parameter
  • It has no effect when compiled with the release profile

Context Management with FromRef

Usage

Apply the #[derive(FromRef)] attribute to a struct that serves as a context container:

use blueprint_sdk::extract::{Context, FromRef};

#[derive(FromRef, Clone)]
struct AppContext {
    keystore: Keystore,
    database_pool: DatabasePool,
    #[from_ref(skip)]
    secret_token: String,
}

Once implemented, you can extract individual components using the Context extractor:

async fn job(Context(keystore): Context<Keystore>) {
    // Only extract the keystore from AppContext
}

async fn another_job(Context(database): Context<DatabasePool>) {
    // Only extract the database pool from AppContext
}

This pattern enables clean dependency injection where each job function only accesses the components it needs.

Load ABI Macro for EVM Integration

The load_abi procedural macro is available when the evm feature is enabled. It loads the JSON ABI (Application Binary Interface) for an Ethereum smart contract.

Usage

use blueprint_sdk::macros::load_abi;

// Load ABI from a file path
const CONTRACT_ABI: &str = load_abi!("path/to/contract.json");

EVM Integration Flow

  1. Load ABI Macro: load_abi!()
  2. Parse JSON
  3. Validate ABI Format
  4. Embed in Compiled Code
  5. EVM Contract Client
  6. Send Transactions
  7. Call Contract Functions

Macro System Architecture

The Blueprint macro system is implemented as a set of procedural macros that operate at compile time.

## Best Practices
When working with the Blueprint Macro System, consider these best practices:

1. **Use `debug_job` during development**:
   - Apply it to job functions to get better error messages.
   - Remove it in production for optimal performance (it automatically disables itself in release builds).

2. **Organize context with `FromRef`**:
   - Keep context objects clean and modular.
   - Use `#[from_ref(skip)]` for sensitive fields that shouldn't be accessible directly.
   - Ensure all extractable fields implement `Clone`.

3. **When working with EVM contracts**:
   - Use `load_abi!` to ensure compile-time validation of ABI files.
   - Keep ABI files in a standard location for consistency.

4. **Testing with macros**:
   - Test derived implementations to ensure they behave as expected.
   - Be aware that some macros might have different behavior in debug and release builds.

## Custom Protocol Integration

## Understanding the Architecture

### Dependencies
- **Blueprint Runner**
- **Client Architecture**
  - `blueprint-client-core`
    - **Client trait**
    - **Context trait**
  - `blueprint-client-tangle`
  - `blueprint-client-evm`
  - `blueprint-client-eigenlayer`
  - **Custom Protocol Client**
    - `TangleContext`
    - `EVMContext`
    - `EigenlayerContext`
    - **Custom Protocol Context**
- **Client Registry**
  - `tangle-subxt`
  - `alloy`
  - `eigensdk`
  - **Custom Protocol SDK**

### Code Snippets

Cargo.toml dependencies for the client crates are defined in the following sources:

- [blueprint-client-core](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/core/Cargo.toml)
- [blueprint-client-tangle](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/tangle/Cargo.toml#L15-L30)
- [blueprint-client-evm](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/evm/Cargo.toml#L15-L41)
- [blueprint-client-eigenlayer](https://github.com/tangle-network/blueprint/blob/af8278cb/crates/clients/eigenlayer/Cargo.toml#L15-L42)

Core Components of the Blueprint Framework

Core Client Components

The Blueprint client system is based on two key abstractions:

  1. Client Core Interface: Defines the core functionality that all protocol clients must implement.
  2. Context Providers: Protocol-specific contexts that provide access to blockchain functionality.

Client Interface

At the heart of the protocol integration is the Client trait defined in the blueprint-client-core crate. This trait defines the essential methods that all protocol implementations must provide.

The Client trait (implemented by TangleClient, EVMClient, EigenlayerClient, and any custom protocol client) exposes:

  • deploy(specs, options) : Future<DeploymentResult>
  • register(options) : Future<RegisterResult>
  • interact(interaction, options) : Future<InteractionResult>
  • query(options) : Future<QueryResult>
  • create_context() : Future<Context>

The Context trait (implemented by TangleContext, EVMContext, EigenlayerContext, and any custom protocol context) exposes:

  • protocol_name() : String
  • chain_id() : ChainId

Integrating Custom Protocols

This document provides guidance on extending the Tangle Blueprint framework to support custom blockchain protocols beyond the built-in support for Tangle Network, EVM, and Eigenlayer.

Architecture Overview

The Blueprint framework uses a modular client architecture that enables seamless integration with different blockchain protocols. Each protocol integration follows a common pattern while allowing for protocol-specific implementation details.

Creating a Custom Protocol Client

To implement support for a custom blockchain protocol, you need to create a new client crate that implements the core client interface for your protocol.

Step 1: Create a New Client Crate

Start by creating a new crate for your custom protocol client:

blueprint-client-myprotocol/
├── Cargo.toml
└── src/
    ├── lib.rs
    ├── client.rs
    ├── context.rs
    └── types.rs

The Cargo.toml file should include dependencies on the core Blueprint components and your protocol-specific dependencies:

[dependencies]
blueprint-core = { workspace = true }
blueprint-std = { workspace = true }
blueprint-client-core = { workspace = true }

# Protocol-specific dependencies
myprotocol-sdk = "x.y.z"

Updating the Client Registry

Modify the client registry to include your custom protocol client:

The Blueprint Runner's ClientRegistry exposes register_client, with_client, and get_client::<T>(), and holds the built-in TangleClient, EVMClient, and EigenlayerClient alongside your MyProtocolClient.

Add Feature Flags

Add feature flags for your protocol in the Blueprint SDK:

[features]
# Existing features
tangle = [...]
evm = [...]
eigenlayer = [...]

# Your custom protocol
myprotocol = [
    "dep:blueprint-client-myprotocol",
    "blueprint-contexts/myprotocol",
    "blueprint-context-derive?/myprotocol",
    "blueprint-testing-utils?/myprotocol",
    "blueprint-runner/myprotocol",
]
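With the feature defined, downstream users opt in at build time through Cargo's feature-flag system, for example:

```sh
cargo build --features myprotocol
```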

Integrating with the Blueprint CLI

To make your protocol available through the Blueprint CLI, update the CLI package:

// Add support for your protocol in the CLI commands
#[derive(clap::ValueEnum, Clone, Debug)]
pub enum Protocol {
    Tangle,
    EVM,
    Eigenlayer,
    MyProtocol, // Add your protocol here
}

Testing Your Protocol Integration

Test your protocol integration by creating a simple Blueprint that uses your custom protocol. Ensure that:

  1. The client can be registered with the Runner
  2. Deployment operations work correctly
  3. Interactions with the blockchain function as expected
  4. Contexts provide the correct protocol-specific functionality

For example:

#[test]
fn test_custom_protocol_integration() {
    let runner = BlueprintRunner::new()
        .with_client(MyProtocolClient::new(...))
        .build();
}

Summary

Integrating a custom blockchain protocol with the Tangle Blueprint framework involves:

  1. Implementing the Client trait for your protocol
  2. Creating a context provider for protocol-specific functionality
  3. Adding any necessary cryptographic support
  4. Creating testing utilities
  5. Updating the Blueprint SDK and CLI to support your protocol

By following the patterns established by existing protocol implementations (Tangle, EVM, and Eigenlayer), you can seamlessly extend the Blueprint framework to support additional blockchain protocols.

Step 2: Implement the Client Trait

In client.rs, implement the Client trait for your custom protocol:

use async_trait::async_trait;
// NOTE: the option and result types below are assumed to live in
// blueprint-client-core alongside the Client trait.
use blueprint_client_core::{
    Client, Context, DeploymentOptions, DeploymentResult, DeploymentSpecs, Error,
    Interaction, InteractionOptions, InteractionResult, QueryOptions, QueryResult,
    RegisterOptions, RegisterResult,
};

pub struct MyProtocolClient {
    // Protocol-specific fields (RPC endpoint, signer, etc.)
}

#[async_trait]
impl Client for MyProtocolClient {
    async fn deploy(&self, specs: DeploymentSpecs, options: DeploymentOptions) -> Result<DeploymentResult, Error> {
        // Implementation for deploying to your protocol
        todo!()
    }

    async fn register(&self, options: RegisterOptions) -> Result<RegisterResult, Error> {
        // Implementation for registering with your protocol
        todo!()
    }

    async fn interact(&self, interaction: Interaction, options: InteractionOptions) -> Result<InteractionResult, Error> {
        // Implementation for interacting with your protocol
        todo!()
    }

    async fn query(&self, options: QueryOptions) -> Result<QueryResult, Error> {
        // Implementation for querying your protocol
        todo!()
    }

    async fn create_context(&self) -> Result<Box<dyn Context>, Error> {
        // Create and return a protocol-specific context
        Ok(Box::new(MyProtocolContext::new(...)))
    }
}

Step 3: Implement the Context Provider

In context.rs, implement a context provider for your protocol:

use blueprint_contexts::Context;

pub struct MyProtocolContext {
    // Protocol-specific state
}

impl Context for MyProtocolContext {
    fn protocol_name(&self) -> String {
        "myprotocol".to_string()
    }

    fn chain_id(&self) -> ChainId {
        // Return the appropriate chain ID
        todo!()
    }
}

// Additional protocol-specific methods can be added in inherent impls on MyProtocolContext.

Update Context Providers

Add your protocol to the context providers:

[features]
# Existing features
evm = ["blueprint-clients/evm"]
eigenlayer = ["blueprint-clients/eigenlayer"]
tangle = ["dep:tangle-subxt", "blueprint-clients/tangle"]

# Your custom protocol
myprotocol = ["blueprint-clients/myprotocol"]

Protocol-Specific Components

Depending on your protocol, you may need to implement additional components:

Cryptography Support

For protocol-specific cryptography, create a new crate in the crypto module:

blueprint-crypto-myprotocol/
├── Cargo.toml
└── src/
    ├── lib.rs
    └── keys.rs

Implement the necessary cryptographic primitives for your protocol, such as key generation, signing, and verification.
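As a starting point, keys.rs might expose a key pair type along the lines of the sketch below. Every name here is hypothetical; align the real implementation with the traits in blueprint-crypto rather than with this shape:

```rust
// keys.rs — hypothetical shape for custom-protocol key support. The actual
// blueprint-crypto traits may differ; treat every name as a placeholder.

/// Public key bytes for the custom protocol.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct MyProtocolPublic(pub [u8; 32]);

/// Secret key bytes, kept private to this module.
pub struct MyProtocolSecret([u8; 32]);

/// A signing key pair for the custom protocol.
pub struct MyProtocolPair {
    public: MyProtocolPublic,
    secret: MyProtocolSecret,
}

impl MyProtocolPair {
    /// Derive a key pair from seed bytes (derivation is protocol-specific).
    pub fn from_seed(seed: [u8; 32]) -> Self {
        todo!("derive the public key from the seed with the protocol's curve")
    }

    /// Sign a message with the protocol's signature scheme.
    pub fn sign(&self, msg: &[u8]) -> Vec<u8> {
        todo!("protocol-specific signing")
    }

    /// Verify a signature against a public key.
    pub fn verify(public: &MyProtocolPublic, msg: &[u8], sig: &[u8]) -> bool {
        todo!("protocol-specific verification")
    }
}
```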

Testing Custom Protocol Clients

Create testing utilities for your protocol:

blueprint-myprotocol-testing-utils/
├── Cargo.toml
└── src/
    ├── lib.rs
    └── mock.rs

These utilities should provide mocks and helpers for testing blueprints that use your protocol.
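For instance, mock.rs could provide a scripted mock client so tests never touch a live chain. The type and methods below are hypothetical, not part of any existing Blueprint API:

```rust
// mock.rs — hypothetical mock for blueprint tests; pair it with your
// Client implementation however your test setup requires.
use std::collections::VecDeque;
use std::sync::Mutex;

/// Returns pre-scripted responses instead of performing real RPC calls.
pub struct MockMyProtocolClient {
    scripted: Mutex<VecDeque<String>>,
}

impl MockMyProtocolClient {
    pub fn new() -> Self {
        Self { scripted: Mutex::new(VecDeque::new()) }
    }

    /// Queue a canned response for the next call.
    pub fn push_response(&self, response: impl Into<String>) {
        self.scripted.lock().unwrap().push_back(response.into());
    }

    /// Pop the next scripted response, if any remain.
    pub fn next_response(&self) -> Option<String> {
        self.scripted.lock().unwrap().pop_front()
    }
}
```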


Continuing the testing checklist above, you can also exercise deployment and interaction through the runner directly:

// Test deployment
let result = runner.deploy_to::<MyProtocolClient>(...).await;
assert!(result.is_ok());

// Test interaction
let result = runner.interact_with::<MyProtocolClient>(...).await;
assert!(result.is_ok());

Best Practices

  1. Maintain Consistent Interfaces: Ensure your client implementation follows the same patterns as the existing protocol implementations.
  2. Comprehensive Testing: Create thorough tests for all aspects of your protocol integration.
  3. Error Handling: Provide clear, protocol-specific error types and comprehensive error handling.
  4. Documentation: Document protocol-specific behaviors and limitations.
  5. Feature Isolation: Use feature flags to ensure your protocol implementation is only included when needed.


Build Environment

Setting Up the Development Environment

Prerequisites

Before setting up the build environment, ensure you have the following prerequisites installed on your system:

  • Git
  • A Unix-like operating system (Linux, macOS) or Windows with WSL
  • Internet connectivity for downloading dependencies

For the recommended Nix-based setup, you'll need:

  • The Nix package manager with flakes enabled

For manual setup, you'll need:

  • Rust 1.86 or later
  • OpenSSL development libraries
  • GMP (GNU Multiple Precision Arithmetic Library)
  • Protocol Buffers compiler
  • Node.js 22 (for TypeScript/web components)

Build Environment Options

There are two primary methods for setting up the build environment:

  • Nix development environment (recommended): provides the complete environment, including the Rust toolchain (1.86), development tools, cargo extensions, Foundry (Ethereum), the Node.js ecosystem, and the protocol dependencies (OpenSSL, the GMP library, Protocol Buffers).
  • Manual setup (alternative): you install the same components yourself.

Nix-based Setup (Recommended)

The project provides a Nix flake for setting up a consistent development environment across all supported platforms. This is the recommended approach as it ensures all dependencies are correctly configured and versioned.

To enter the development environment with Nix:

  1. Clone the repository:

    git clone https://github.com/tangle-network/blueprint
    cd blueprint
  2. Enter the Nix development shell:

    nix develop

This sets up a complete environment with all required tools and dependencies.

Manual Setup

If you prefer not to use Nix, you can manually install the required dependencies:

  1. Install Rust 1.86 using rustup:

    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
    rustup default 1.86
    rustup component add cargo rustfmt clippy rust-src
  2. Install system dependencies (examples shown for Ubuntu):

    sudo apt update
    sudo apt install build-essential pkg-config libssl-dev libgmp-dev protobuf-compiler clang libclang-dev
  3. Install Node.js 22 for TypeScript components:

    # Using nvm or your preferred Node.js installation method
    nvm install 22
    npm install -g yarn typescript
  4. Install recommended Cargo extensions:

    cargo install cargo-nextest cargo-expand cargo-dist
  5. Install Foundry for Ethereum development:

    curl -L https://foundry.paradigm.xyz | bash
    foundryup

Rust Toolchain Configuration

The project uses Rust 1.86 with specific components as defined in the rust-toolchain.toml file:

Required Components
- Rust Toolchain: Rust 1.86
- cargo
- rustfmt
- clippy
- rust-src

Development Environment Components

The Nix development environment provides a comprehensive set of tools and libraries for Blueprint development.


Keystore Storage Backends

The Blueprint framework supports multiple keystore storage backends:

  • InMemoryStorage
  • FileStorage (std)
  • SubstrateStorage (substrate-keystore)
  • AWS Signer
  • GCP Signer
  • Ledger

The Keystore::new() constructor initializes a keystore over either local or remote storage, depending on configuration.

Keystore Implementation

Key Concepts

  • The Keystore is responsible for securely managing cryptographic keys.
  • The storage module handles the persistence of keys.
  • Configuration settings dictate how the Keystore operates within the application.


Project Structure and Build Artifacts

When working with the Blueprint project, several directories and files are created during the build process:

| Directory/File | Description |
|---|---|
| /target | The main Rust build output directory containing compiled artifacts |
| /crates/*/target | Individual crate build outputs (if built separately) |
| /node_modules | Node.js dependencies for TypeScript/web components |
| /contracts/out | Compiled smart contract artifacts from Foundry |
| /contracts/cache | Foundry cache for smart contract compilation |
| blueprint.lock | Lock file for Blueprint dependencies |
| blueprint.json | Configuration file for Blueprint projects |
| /cache | General cache directory for the project |
| .direnv | Directory created by direnv when using Nix |

Testing Configuration

The project uses Nextest for Rust test running with custom profiles:

| Profile | Description |
|---|---|
| ci | Profile for continuous integration with immediate test failure output |
| serial | Single-threaded test execution for tests that cannot run in parallel |

To run tests with a specific profile:

cargo nextest run --profile ci
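As an illustrative sketch (not the repository's actual configuration), the two profiles in .config/nextest.toml might look like this:

```toml
# .config/nextest.toml — illustrative only; the real profiles may set
# additional options.
[profile.ci]
failure-output = "immediate"   # print failures as soon as they occur

[profile.serial]
test-threads = 1               # force single-threaded execution
```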

Cryptographic Feature Flags

The build environment supports various cryptographic backends through feature flags, affecting dependencies included at compile time and available functionality:

  • Key type features: ecdsa, sr25519-schnorrkel, zebra (ed25519), bls, bn254, sp-core, tangle
  • Storage backend features: std (FileStorage), substrate-keystore, aws-signer, gcp-signer, ledger-browser, ledger-node

Managing Project Features with Cargo

You can enable or disable features using Cargo's feature flag system. For example:

cargo build --features "ecdsa sr25519-schnorrkel std"

Development Workflow

The typical development workflow involves these steps:

  1. Enter the development environment.
  2. Edit code.
  3. Build the project (cargo build, cargo clippy).
  4. Run tests (cargo nextest run, cargo test). If tests fail, return to editing.
  5. Prepare deployment.
  6. Deploy to the target network with cargo-tangle's deploy commands.

Common Issues and Troubleshooting

| Issue | Solution |
|---|---|
| Missing crypto feature | Add the required feature to your build command or dependency specification |
| Keystore access errors | Check file permissions if using FileStorage, or connectivity for remote keystores |
| Protocol buffer compile errors | Ensure protobuf-compiler is installed and on your PATH |
| Build failures on macOS | Ensure Apple frameworks are available (Security, SystemConfiguration) |
| Slow compilation | On Linux, ensure the mold linker is enabled; on macOS consider using lld |

Additional Development Tools

For enhanced development experience, the environment includes:

  • rust-analyzer: Provides IDE integrations for Rust
  • cargo-expand: Useful for debugging macros by expanding them
  • cargo-dist: Creates distribution packages
  • taplo: TOML file formatter and linter
  • foundry: Ethereum development framework for smart contracts
  • TypeScript: For web component development


Networking Extensions Overview

Blueprint's networking extensions enhance core networking functionality with specialized capabilities for distributed protocols, building on the solid foundation of Blueprint's core networking layer so that developers can implement advanced distributed protocols without handling low-level networking complexities. The SDK provides a modular networking architecture that can be extended with specialized components for different protocol requirements.

Available Extensions:

  1. Aggregated Signature Gossip Extension: Enables efficient collection and aggregation of cryptographic signatures.
  2. Round-Based Protocol Extension: Facilitates implementation of synchronous round-based protocols.


Enabling Extensions

To use these extensions in your Blueprint project, you need to add them as dependencies and enable the required features:

Aggregated Signature Gossip Extension

[dependencies]
blueprint-networking-agg-sig-gossip-extension = "0.1.0-alpha.3"
blueprint-crypto = { version = "0.1.0-alpha.4", features = ["aggregation"] }

Round-Based Protocol Extension

[dependencies]
blueprint-networking-round-based-extension = "0.1.0-alpha.4"
round-based = "0.1.0"

Conclusion

Blueprint's networking extensions provide specialized capabilities for building complex distributed protocols:

  • The Aggregated Signature Gossip Extension facilitates efficient signature collection and aggregation, essential for consensus protocols and threshold cryptography.
  • The Round-Based Protocol Extension enables implementation of synchronized multi-party computation protocols with well-defined rounds and message flows.

Aggregated Signature Gossip Extension

The Aggregated Signature Gossip Extension provides functionality for collecting and aggregating cryptographic signatures over a peer-to-peer network, useful for consensus algorithms and threshold signature schemes.

Architecture

Signatures are distributed over libp2p::gossipsub; supported signature schemes include BLS signatures and the BN254 curve.

Features

The Aggregated Signature Gossip Extension provides:

  1. Efficient peer-to-peer distribution of signatures
  2. Aggregation of signatures from multiple participants
  3. Support for various signature schemes including BLS and BN254
  4. Integration with Blueprint's crypto library

Dependencies

The extension relies on the following components:

| Dependency | Purpose |
|---|---|
| blueprint-core | Core functionality |
| blueprint-crypto | Cryptographic operations (with the aggregation feature) |
| blueprint-networking | Network communication |
| libp2p | Underlying p2p network stack |

Round-Based Protocol Extension

The Round-Based Protocol Extension integrates with the round-based crate to support protocols that operate in synchronized rounds, particularly useful for multi-party computation (MPC) protocols.

Architecture

The extension adapts Blueprint's NetworkServiceHandle to the round-based::Delivery trait: P2P messages sent and received over the network are transformed into round messages, which are consumed by the protocol implementation. A RoundsRouter manages the individual rounds (Round 1, Round 2, and so on).

Integration Example

The extension provides an adapter (RoundBasedNetworkAdapter) that connects Blueprint's networking layer to the round-based protocol framework:

In the randomness-generation example below, the protocol registers its message types with a RoundsRouter (via add_round and RoundInput) and broadcasts them during execution:

  • Message types: CommitMsg { commitment: Output } and DecommitMsg { randomness: [u8; 32] }
  • Execution: generate local_randomness; compute commitment = Sha256::digest(local_randomness); send CommitMsg; receive all commitments; send DecommitMsg; receive all reveals; verify commitments match the revealed values; combine the values with XOR.

Using the Round-Based Extension

To use the Round-Based Extension:

  1. Create a RoundBasedNetworkAdapter connecting to your Blueprint network.
  2. Create an MpcParty using the adapter.
  3. Define your protocol using the round-based framework.
  4. Execute the protocol with the MPC party.

Example from the test file:

// Create the adapter connecting Blueprint networking to round-based
let node_network = RoundBasedNetworkAdapter::new(
    network_handle,    // NetworkServiceHandle from Blueprint
    participant_id,    // Local participant index (0, 1, etc.)
    &participants,     // Mapping from participant IDs to peer IDs
    instance_id        // Protocol instance identifier
);

// Create an MPC party using the adapter
let mpc_party = MpcParty::connected(node_network);

// Run the protocol
let result = protocol_of_random_generation(
    mpc_party, 
    participant_id, 
    num_participants, 
    rng
).await;

Example: Randomness Generation Protocol

The test file rand_protocol.rs demonstrates a two-round randomness generation protocol implemented using the Round-Based Extension:

  1. Round 1 (Commit): Each party generates a random value and broadcasts a commitment (hash).
  2. Round 2 (Reveal): Each party reveals their random value.
  3. Output: All parties verify and combine the revealed values to produce shared randomness.
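To make the two rounds concrete, here is a self-contained sketch of the commit-reveal arithmetic using the sha2 crate; it is independent of the Blueprint networking APIs and simulates both parties in a single process:

```rust
use sha2::{Digest, Sha256};

fn main() {
    // Round 1: each party picks randomness and broadcasts a commitment.
    let r1: [u8; 32] = [7u8; 32];
    let r2: [u8; 32] = [42u8; 32];
    let c1 = Sha256::digest(r1);
    let c2 = Sha256::digest(r2);

    // Round 2: parties reveal; everyone re-hashes and checks commitments.
    assert_eq!(Sha256::digest(r1), c1);
    assert_eq!(Sha256::digest(r2), c2);

    // Output: XOR the revealed values to obtain the shared randomness.
    let mut shared = [0u8; 32];
    for i in 0..32 {
        shared[i] = r1[i] ^ r2[i];
    }
    println!("shared randomness: {shared:02x?}");
}
```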

Under the hood, the adapter is created with create(network_handle, participant_id, participants, instance_id). Outgoing round messages pass through send(message), are serialized and transmitted to peers via send(peer_id, serialized_message), and incoming messages flow through receive(message) before being handed to the protocol with deliver(round_message).



Key Management Commands

This page documents the key management commands available in the cargo-tangle CLI tool. These commands provide capabilities for generating, importing, exporting, and managing cryptographic keys used across the Tangle Blueprint framework.

Key Management Command Flow

The cargo tangle key command provides five subcommands: generate (g), import (i), export (e), list (l), and generate-mnemonic (m).

Key Types and Parameters

  • Generate Key:
    • Command: generate
    • Parameters: Key Type, Seed, Output Path
  • Import Key:
    • Command: import
    • Parameters: Key Type, Secret, Keystore Path
  • Export Key:
    • Command: export
    • Parameters: Key Type, Public Key, Keystore Path
  • List Keys:
    • Command: list
    • Parameters: Keystore Path
  • Generate Mnemonic:
    • Command: generate-mnemonic
    • Parameters: Word Count


Generating Cryptographic Keys

Generate Key

Generates a new cryptographic key pair of the specified type.

Command:

cargo tangle key generate [OPTIONS]
cargo tangle k g [OPTIONS]

Options:

  • -t, --key-type <KEY_TYPE>: The type of key to generate (required)
  • -o, --output <OUTPUT>: Path to save the key to (optional)
  • --seed <SEED>: The seed to use for key generation (hex format without 0x prefix) (optional)
  • -v, --show-secret: Show the secret key in output (optional)

Example:

cargo tangle key generate -t sr25519 -o ./my-keys --show-secret

This command generates an Sr25519 key pair, saves it to the ./my-keys directory, and displays both the public and private keys.

Import Key

Command:

cargo tangle key import [OPTIONS]
cargo tangle k i [OPTIONS]


Options:

  • -t, --key-type <KEY_TYPE>: The type of key to import (optional)
  • -x, --secret <SECRET>: The secret key to import (hex format without 0x prefix) (optional)
  • -k, --keystore-path <KEYSTORE_PATH>: The path to the keystore (required)
  • -p, --protocol <PROTOCOL>: The protocol for the key (tangle, eigenlayer) (defaults to tangle)

If no key type is provided, the command will enter an interactive mode to prompt for the key type and value.

Example:

cargo tangle key import -t ecdsa -x 1a2b3c4d... -k ./keystore -p eigenlayer

This command imports an ECDSA private key into the specified keystore path for use with Eigenlayer.

Export Key

Command:

cargo tangle key export [OPTIONS]
cargo tangle k e [OPTIONS]


cargo tangle key export -t <KEY_TYPE> -p <PUBLIC> -k <KEYSTORE_PATH>

Options:

  • -t, --key-type <KEY_TYPE>: The type of key to export (required)
  • -p, --public <PUBLIC>: The public key to export (hex format without 0x prefix) (required)
  • -k, --keystore-path <KEYSTORE_PATH>: The path to the keystore (required)

Example:

cargo tangle key export -t sr25519 -p abcdef1234... -k ./keystore

This command exports the Sr25519 private key corresponding to the specified public key from the keystore.

List Keys

Command:

cargo tangle key list [OPTIONS]
cargo tangle k l [OPTIONS]

Options:

  • -k, --keystore-path <KEYSTORE_PATH>: The path to the keystore (required)

Example:

cargo tangle key list -k ./keystore

This command lists all keys stored in the specified keystore, displaying the key type and public key for each.

Generate Mnemonic

Generates a new mnemonic phrase that can be used to derive keys.

Command:

cargo tangle key generate-mnemonic [OPTIONS]
cargo tangle k m [OPTIONS]

Options:

  • -w, --word-count <WORD_COUNT>: Number of words in the mnemonic (12, 15, 18, 21, or 24) (optional, defaults to 12)

Example:

cargo tangle key generate-mnemonic -w 24

This command generates a 24-word mnemonic phrase.

Integration with CLI Commands

The key management commands (generate, import, export, list, generate-mnemonic) are integrated with other CLI commands that require keys: blueprint commands such as run, deploy, and submit accept a --keystore-path option and fall back to prompt_for_keys() when keys are missing.

When deploying a blueprint or running a service, if no keystore is found at the specified path, the CLI will prompt the user to create keys for the necessary key types.

Supported Key Types

The Blueprint framework supports multiple key types for different cryptographic requirements and blockchain protocols:

| Key Type | Description | Primarily Used For |
|---|---|---|
| sr25519 | Schnorrkel/Ristretto x25519 | Tangle Network, Substrate-based chains |
| ed25519 | Edwards-curve Digital Signature Algorithm | General-purpose signatures |
| ecdsa | Elliptic Curve Digital Signature Algorithm | Ethereum, EVM-compatible chains |
| bls381 | Boneh-Lynn-Shacham signatures (BLS12-381 curve) | Signature aggregation protocols |
| bls377 | Boneh-Lynn-Shacham signatures (BLS12-377 curve) | Signature aggregation protocols |
| bn254 | Barreto-Naehrig curve for BLS signatures | Eigenlayer and zero-knowledge proofs |

Blueprint Keystore System

Keystore Architecture

The Blueprint keystore system provides a flexible infrastructure for managing cryptographic keys with multiple storage backends.

The Keystore is configured through KeystoreConfig and dispatches to a Backend trait with local backends (InMemoryStorage, FileStorage, SubstrateStorage) and remote backends (AWS KMS, Google Cloud KMS, Ledger hardware wallets). Backends implement the RawStorage trait, over which TypedStorage provides key-type-aware access.


Key Storage Backends

The Blueprint keystore system supports multiple storage backends for keys. Each backend, selected through KeystoreConfig, implements the raw storage operations store_raw(), load_secret_raw(), remove_raw(), contains_raw(), and list_raw(); the built-in implementations are InMemoryStorage, FileStorage, and SubstrateStorage.
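From the operation names above, the RawStorage trait has roughly the following shape. This is a sketch inferred from the listing; the concrete parameter and return types in the real keystore crate are assumptions here:

```rust
// Sketch of the raw storage interface; types are illustrative only.
pub trait RawStorage {
    type Error;

    fn store_raw(&self, key_type: &str, public: &[u8], secret: &[u8]) -> Result<(), Self::Error>;
    fn load_secret_raw(&self, key_type: &str, public: &[u8]) -> Result<Option<Vec<u8>>, Self::Error>;
    fn remove_raw(&self, key_type: &str, public: &[u8]) -> Result<(), Self::Error>;
    fn contains_raw(&self, key_type: &str, public: &[u8]) -> bool;
    fn list_raw(&self, key_type: &str) -> Vec<Vec<u8>>;
}
```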

In-Memory Storage

Configuration Example:

let config = KeystoreConfig::new().in_memory(true);
let keystore = Keystore::new(config)?;

File Storage

Configuration Example:

let config = KeystoreConfig::new().fs_root("./my-keystore");
let keystore = Keystore::new(config)?;

Substrate Storage

Configuration Example:

let keystore = sc_keystore::LocalKeystore::in_memory();
let config = KeystoreConfig::new().substrate(Arc::new(keystore));
let keystore = Keystore::new(config)?;

Best Practices

  1. Separate Keystores: Use different keystores for different environments (development, testing, production).
  2. Backup Private Keys: Always keep secure backups of your private keys or mnemonic phrases.
  3. Key Rotation: Periodically generate new keys for improved security.
  4. Environment-Specific Keys: Use different keys for different networks/protocols (Tangle vs. Eigenlayer).
  5. Secure Storage: For production deployments, use secure key storage options rather than plain file storage.

Error Handling

The key management commands handle various error scenarios:

| Error | Description |
|---|---|
| KeyTypeNotSupported | The specified key type is not supported |
| KeyNotFound | The key was not found in the keystore |
| InvalidKeyFormat | The key format is invalid |
| InvalidSeed | The seed provided is invalid |
| StorageNotSupported | The storage backend is not supported |
| KeystoreOperationNotSupported | The operation is not supported by the keystore |

Example Workflow

A typical workflow for using key management commands might look like:

  1. Generate a mnemonic phrase:
     cargo tangle key generate-mnemonic -w 24
  2. Generate keys for different protocols:
     cargo tangle key generate -t sr25519 -o ./tangle-keystore
     cargo tangle key generate -t ecdsa -o ./eigenlayer-keystore
  3. List available keys:
     cargo tangle key list -k ./tangle-keystore
  4. Deploy a blueprint using the generated keys:
     cargo tangle blueprint deploy tangle --keystore-path ./tangle-keystore
  5. Run a service with the keystore:
     cargo tangle blueprint run -p tangle --keystore-path ./tangle-keystore

Development Environment Setup

The Tangle Blueprint project uses Nix flakes for reproducible development environments.

Prerequisites

Before setting up the development environment, you'll need:

  • Git
  • Nix package manager with flakes enabled (recommended)
  • Alternatively: Rust toolchain, Foundry, and other dependencies listed in the flake.nix file

Using Nix Flakes (Recommended)

To set up using Nix:

  1. Clone the repository
  2. Run nix develop in the project root
  3. All required tools will be available in your shell environment

The development environment is defined in the flake.nix file, which configures all required dependencies including:

  1. Rust toolchain (defined in rust-toolchain.toml)
  2. Foundry for Ethereum development
  3. Required system libraries (OpenSSL, GMP, Protobuf)
  4. Cargo tools for testing and development

Manual Setup

If you prefer not to use Nix, you can manually install the required dependencies:

  1. Install Rust using rustup
  2. Install Foundry for Ethereum development
  3. Install system libraries:
    • OpenSSL development packages
    • GMP development packages
    • Protobuf compiler
  4. Install additional Cargo tools:
    • cargo-nextest
    • cargo-expand
    • cargo-dist

Build System

The Tangle Blueprint project is organized as a Cargo workspace with multiple packages.

Building the Project

To build the entire project:

cargo build

To build a specific package:

cargo build -p <package-name>

To build with optimizations for release:

cargo build --release

Testing Framework

The Blueprint project uses a comprehensive testing framework with support for both unit and integration tests.


Running Tests

The project uses cargo-nextest for efficient test execution.

To run tests for the entire project:

cargo nextest run

To run tests for a specific package:

cargo nextest run -p <package-name>

For documentation tests (which aren't yet supported by nextest):

cargo test --doc

Test Profiles

The project defines custom test profiles in .config/nextest.toml:

  1. ci - Used in continuous integration, runs tests in parallel where possible.
  2. serial - Used for tests that cannot run in parallel (networking tests, etc.).

Continuous Integration

The project uses GitHub Actions for continuous integration, ensuring code quality and test coverage for all pull requests and the main branch.

CI Workflow

The CI workflow is defined in .github/workflows/ci.yml and includes the following jobs:

  1. Formatting: Checks code formatting using rustfmt
  2. Linting: Runs clippy to check for code quality issues
  3. Matrix Testing: Dynamically generates a matrix of workspace packages and runs tests for each package

Matrix Testing

The CI uses a matrix testing approach where:

  1. The workflow dynamically generates a list of all packages in the workspace
  2. Tests are run for each package individually
  3. Certain packages are identified as requiring serial test execution
  4. The appropriate nextest profile (ci or serial) is selected based on the package

This approach ensures efficient test execution while respecting the constraints of packages that cannot run tests in parallel.

Development Best Practices

When contributing to the Blueprint project, follow these best practices:

  1. Ensure code passes all CI checks before merging:
    • Run cargo +nightly fmt to format code
    • Run cargo clippy to check for common issues
    • Run all tests with cargo nextest run
  2. Write comprehensive tests for new features:
    • Unit tests for individual components
    • Integration tests for inter-component interactions
    • Documentation tests for public API examples
  3. Update documentation when making significant changes:
    • Update README files as needed
    • Add documentation comments to public APIs
    • Consider updating this wiki with new information

Development Workflow

  1. Clone the repository.
  2. Set up the development environment.
  3. Create a feature branch.
  4. Implement the feature.
  5. Run tests locally.
  6. Create a pull request.
  7. CI checks run.
  8. Code review.
  9. Merge to main.

Troubleshooting Common Issues

| Issue | Solution |
|---|---|
| Linker errors | Ensure required system libraries are installed (OpenSSL, GMP, Protobuf) |
| Test failures in CI but not locally | Ensure tests work in isolation and don't depend on environment-specific state |
| Formatting errors in CI | Run cargo +nightly fmt locally before pushing |
| Slow builds | Consider enabling faster linkers like mold (Linux) or using the Nix environment |
| Dependency issues | Update lockfiles with cargo update, or delete the target directory and rebuild |


Installation

You can install the Tangle CLI in two ways:

Option 1: Install Script (recommended)

Install the latest stable version of cargo-tangle using the installation script:

curl --proto '=https' --tlsv1.2 -LsSf https://github.com/tangle-network/gadget/releases/download/cargo-tangle/v0.1.1-beta.7/cargo-tangle-installer.sh | sh

Option 2: Install from source

Install the latest git version of cargo-tangle using the following command:

cargo install cargo-tangle --git https://github.com/tangle-network/gadget --force

CLI Overview

The cargo-tangle command-line interface is used to create, deploy, and manage blueprints on the Tangle Network. It supports various commands for blueprint management and key handling.

Command Categories

  1. Blueprint Commands:

    • Create (c)
    • Deploy (d)
    • Run (r)
    • ListRequests (ls)
    • ListBlueprints (lb)
    • Register (reg)
    • AcceptRequest (accept)
    • RejectRequest (reject)
    • RequestService (req)
    • SubmitJob (submit)
    • DeployMBSM (mbsm)
  2. Key Commands:

    • Generate (g)
    • Import (i)
    • Export (e)
    • List (l)
    • GenerateMnemonic (m)

CLI Command Structure

  • BlueprintCommands Enum:

    • create_new_blueprint()
    • deploy_tangle()
    • deploy_eigenlayer()
    • run_blueprint()
    • list_requests()
  • KeyCommands Enum:

    • generate_key()
    • import_key()
    • export_key()
    • list_keys()
    • generate_mnemonic()


Blueprint Commands

Create

Creates a new blueprint project from a template.
Usage:

cargo tangle blueprint create --name <NAME> [OPTIONS]

Aliases: cargo tangle bp c
Options:

  • --name, -n <NAME>: The name of the blueprint (required)
  • --source: Specify the source template (optional)
  • --blueprint-type: Specify the type of blueprint to create (optional)

Example:

cargo tangle blueprint create --name my_blueprint

Deploy

Deploys a blueprint to either the Tangle Network or Eigenlayer.
Usage:

cargo tangle blueprint deploy <TARGET> [OPTIONS]

Aliases: cargo tangle bp d

Targets:

  1. Tangle

    cargo tangle blueprint deploy tangle [OPTIONS]
    

    Options:

    • --http-rpc-url <URL>: HTTP RPC URL (default: https://rpc.tangle.tools)
    • --ws-rpc-url <URL>: WebSocket RPC URL (default: wss://rpc.tangle.tools)
    • --package, -p <PACKAGE>: The package to deploy
    • --devnet: Start a local devnet using a Tangle test node
    • --keystore-path, -k <PATH>: The keystore path (defaults to ./keystore)
  2. Eigenlayer

    cargo tangle blueprint deploy eigenlayer [OPTIONS]
    

     Options:

     • --rpc-url <URL>: HTTP RPC URL
     • --contracts-path <PATH>: Path to the contracts
     • --ordered-deployment: Deploy contracts in an interactive ordered manner
     • --network, -w <NETWORK>: Network to deploy to (local, testnet, mainnet; default: local)
     • --devnet: Start a local devnet using Anvil (only valid with network=local)
     • --keystore-path, -k <PATH>: The keystore path (defaults to ./keystore)

Example:

cargo tangle blueprint deploy tangle --devnet --package my_blueprint

Deploy MBSM

Deploys a Master Blueprint Service Manager (MBSM) contract to the Tangle Network.

Usage:

cargo tangle blueprint deploy-mbsm [OPTIONS]

Aliases: cargo tangle bp mbsm

Options:

  • --http-rpc-url <URL>: The HTTP RPC URL to use (default: http://127.0.0.1:9944)
  • --force, -f: Force deployment even if the contract is already deployed

Example:

cargo tangle blueprint deploy-mbsm --http-rpc-url https://rpc.tangle.tools

Run Command

Runs a blueprint gadget, connecting to a specified protocol and network. Usage:

cargo tangle blueprint run --protocol <PROTOCOL> [OPTIONS]

Aliases: cargo tangle bp r

Options:

  • --protocol, -p <PROTOCOL>: The protocol to run (eigenlayer or tangle)
  • --rpc-url, -u <URL>: The HTTP RPC endpoint URL (default: http://127.0.0.1:9944)
  • --keystore-path, -k <PATH>: The keystore path (defaults to ./keystore)
  • --binary-path, -b <PATH>: The path to the binary
  • --network, -w <NETWORK>: The network to connect to (local, testnet, mainnet; default: local)
  • --data-dir, -d <PATH>: The data directory path (defaults to ./data)
  • --bootnodes, -n <BOOTNODES>: Optional bootnodes to connect to
  • --settings-file, -f <FILE>: Path to the protocol settings env file (default: ./settings.env)
  • --podman-host, -p <URL>: The Podman host to use for containerized blueprints

Example:

cargo tangle blueprint run --protocol tangle --rpc-url http://127.0.0.1:9944

Managing Tangle Blueprints

A complete workflow from creation to job submission:

  1. Create a new blueprint:
     cargo tangle blueprint create --name my_blueprint
  2. Navigate to the blueprint directory:
     cd my_blueprint
  3. Build the blueprint:
     cargo build
  4. Deploy to a local Tangle devnet:
     cargo tangle blueprint deploy tangle --devnet
  5. List the deployed blueprints:
     cargo tangle blueprint list-blueprints
  6. Register as an operator for blueprint #0:
     cargo tangle blueprint register --blueprint-id 0
  7. Request a service for blueprint #0:
     cargo tangle blueprint request-service --blueprint-id 0 --value 1000000
  8. List service requests:
     cargo tangle blueprint list-requests
  9. Accept service request #0:
     cargo tangle blueprint accept-request --request-id 0
  10. Run the blueprint for the service:
      cargo tangle blueprint run --protocol tangle
  11. Submit a job to the service:
      cargo tangle blueprint submit --blueprint-id 0 --service-id 0 --job 1

Service Requests and Responses

List Requests

Lists service requests for a Tangle blueprint. Usage:

cargo tangle blueprint list-requests [OPTIONS]

Aliases: cargo tangle bp ls
Options:

  • --ws-rpc-url <URL>: WebSocket RPC URL to use (default: ws://127.0.0.1:9944)

Example:

cargo tangle blueprint list-requests --ws-rpc-url wss://rpc.tangle.tools

List Blueprints

Lists blueprints on the target Tangle network. Usage:

cargo tangle blueprint list-blueprints [OPTIONS]

Aliases: cargo tangle bp lb
Options:

  • --ws-rpc-url <URL>: WebSocket RPC URL to use (default: ws://127.0.0.1:9944)

Reject Request

Rejects a service request. Usage:

cargo tangle blueprint reject-request [OPTIONS]

Aliases: cargo tangle bp reject
Options:

  • --ws-rpc-url <URL>: WebSocket RPC URL to use (default: ws://127.0.0.1:9944)
  • --keystore-uri <URI>: The keystore URI to use (default: ./keystore)
  • --request-id <ID>: The request ID to respond to

Example:

cargo tangle blueprint reject-request --request-id 123

Request Service

Requests a Tangle service. Usage:

cargo tangle blueprint request-service [OPTIONS]

Aliases: cargo tangle bp req
Options:

  • --ws-rpc-url <URL>: WebSocket RPC URL to use (default: ws://127.0.0.1:9944)
  • --blueprint-id <ID>: The blueprint ID to request
  • --min-exposure-percent <PERCENT>: The minimum exposure percentage (default: 50)
  • --max-exposure-percent <PERCENT>: The maximum exposure percentage (default: 80)
  • --target-operators <OPERATORS>: The target operators to request
  • --value <VALUE>: The value to request
  • --keystore-uri <URI>: The keystore URI to use (default: ./keystore)

Example:

cargo tangle blueprint request-service --blueprint-id 42 --value 1000000

Submit Job

Submits a job to a service.

Usage:

cargo tangle blueprint submit [OPTIONS]

Options:

  • --ws-rpc-url <URL>: The RPC endpoint to connect to (default: ws://127.0.0.1:9944)
  • --service-id <ID>: The service ID to submit the job to
  • --blueprint-id <ID>: The blueprint ID to submit the job to
  • --keystore-uri <URI>: The keystore URI to use
  • --job <JOB>: The job ID to submit
  • --params-file <FILE>: Optional path to a JSON file containing job parameters
  • --watcher: Whether to wait for the job to complete

Example:

cargo tangle blueprint submit --blueprint-id 42 --job 1 --keystore-uri ./keystore

Protocol Settings for Blueprints

Protocol Settings Format

When running a blueprint with the --settings-file option, the file should be in the .env format and contain protocol-specific settings:

Tangle Settings Example

BLUEPRINT_ID=42
SERVICE_ID=123

Eigenlayer Settings Example

ALLOCATION_MANAGER_ADDRESS=0x1234...
REGISTRY_COORDINATOR_ADDRESS=0x5678...
OPERATOR_STATE_RETRIEVER_ADDRESS=0x9abc...
DELEGATION_MANAGER_ADDRESS=0xdef0...
SERVICE_MANAGER_ADDRESS=0x1122...
STAKE_REGISTRY_ADDRESS=0x3344...
STRATEGY_MANAGER_ADDRESS=0x5566...
STRATEGY_ADDRESS=0x7788...
AVS_DIRECTORY_ADDRESS=0x99aa...
REWARDS_COORDINATOR_ADDRESS=0xbbcc...
PERMISSION_CONTROLLER_ADDRESS=0xddee...
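Pass the file to the run command with the --settings-file flag documented above, for example:

```sh
cargo tangle blueprint run --protocol eigenlayer --settings-file ./settings.env
```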


Register

Registers for a Tangle blueprint.
Usage:

cargo tangle blueprint register [OPTIONS]

Aliases: cargo tangle bp reg
Options:

  • --ws-rpc-url <URL>: WebSocket RPC URL to use (default: ws://127.0.0.1:9944)
  • --blueprint-id <ID>: The blueprint ID to register
  • --keystore-uri <URI>: The keystore URI to use (default: ./keystore)

Example:

cargo tangle blueprint register --blueprint-id 42 --ws-rpc-url wss://rpc.tangle.tools

Accept Request

Accepts a Tangle service request.

Usage:

cargo tangle blueprint accept-request [OPTIONS]

Aliases: cargo tangle bp accept

Options:

  • --ws-rpc-url <URL>: WebSocket RPC URL to use (default: ws://127.0.0.1:9944)
  • --min-exposure-percent <PERCENT>: The minimum exposure percentage (default: 50)
  • --max-exposure-percent <PERCENT>: The maximum exposure percentage (default: 80)
  • --keystore-uri <URI>: The keystore URI to use (default: ./keystore)
  • --restaking-percent <PERCENT>: The restaking percentage (default: 50)
  • --request-id <ID>: The request ID to respond to

Example:

cargo tangle blueprint accept-request --request-id 123 --min-exposure-percent 60


Key Management Commands

Key management commands handle cryptographic key operations such as generation, import, export, and listing.

Generate

Generates a new cryptographic key.
Usage:

cargo tangle key generate [OPTIONS]

Aliases: cargo tangle k g

Options:

  • --key-type, -t <TYPE>: The type of key to generate (sr25519, ed25519, ecdsa, bls381, bls377, bn254)
  • --output, -o <PATH>: The path to save the key to
  • --seed <SEED>: The seed to use for key generation (hex format without 0x prefix)
  • --show-secret, -v: Show the secret key in output

Example:

cargo tangle key generate --key-type sr25519 --output ./my-keystore --show-secret

Import

Imports a key into the keystore.

Usage:

cargo tangle key import [OPTIONS]

Aliases: cargo tangle k i

Options:

  • --key-type, -t <TYPE>: Type of key to import (sr25519, ed25519, ecdsa, bls381, bls377, bn254)
  • --secret, -x <SECRET>: Secret key to import (hex format without 0x prefix)
  • --keystore-path, -k <PATH>: Path to the keystore
  • --protocol, -p <PROTOCOL>: Protocol for generating keys (Eigenlayer or Tangle; default: tangle)

Example:

cargo tangle key import --key-type sr25519 --secret abcdef1234567890 --keystore-path ./keystore

Additional Commands

  • Generate a new keystore with multiple key types:
    mkdir -p ./keystore
    cargo tangle key generate --key-type sr25519 --output ./keystore
    cargo tangle key generate --key-type ecdsa --output ./keystore
  • List all keys in the keystore:
    cargo tangle key list --keystore-path ./keystore
  • Generate a mnemonic for backup:
    cargo tangle key generate-mnemonic --word-count 24
  • Export a key:
    cargo tangle key export --key-type sr25519 --public <PUBLIC_KEY> --keystore-path ./keystore

Export
Exports a key from the keystore.
Usage:

cargo tangle key export [OPTIONS]


Aliases: cargo tangle k e
Options:

  • --key-type, -t <TYPE>: The type of key to export (sr25519, ed25519, ecdsa, bls381, bls377, bn254)
  • --public, -p <PUBLIC>: The public key to export (hex format without 0x prefix)
  • --keystore-path, -k <PATH>: The path to the keystore

Example:

cargo tangle key export --key-type sr25519 --public 0123456789abcdef --keystore-path ./keystore

List Keys in the Keystore

Lists all keys in the keystore.

Usage:

cargo tangle key list [OPTIONS]

Aliases: cargo tangle k l
Options:

  • --keystore-path, -k <PATH>: The path to the keystore

Example:

cargo tangle key list --keystore-path ./keystore

Generate Mnemonic

Generates a new mnemonic phrase.

Usage:

cargo tangle key generate-mnemonic [OPTIONS]

Aliases: cargo tangle k m

Options:

  • --word-count, -w <COUNT>: Number of words in the mnemonic (12, 15, 18, 21, or 24)

Example:

cargo tangle key generate-mnemonic --word-count 24


Environment Variables

Several CLI commands support configuration through environment variables:

| Environment Variable | Used By | Description |
|---|---|---|
| WS_RPC_URL | List commands, Register, Request/Accept/Reject | WebSocket RPC URL (default: ws://127.0.0.1:9944) |
| HTTP_RPC_URL | Deploy MBSM | HTTP RPC URL (default: http://127.0.0.1:9944) |
| KEYSTORE_URI | Register, Accept/Reject, Request | Keystore URI (default: ./keystore) |
| SIGNER | Deploy | SURI of the signer account (optional) |
| EVM_SIGNER | Deploy | SURI of the EVM signer account (optional) |
| PODMAN_HOST | Run | Podman host to use for containerized blueprints |


Testing Framework Overview

  • Core Testing Components: Understand the essential parts of the testing framework.
  • TestEnv Trait: Defines the environment for running tests.
  • TestRunner: The main component responsible for executing tests.
  • Protocol-Specific Test Environments: Tailored environments for different protocols.
  • TangleTestEnv: A specific test environment for Tangle.
  • EigenlayerBLSTestEnv: Test environment for Eigenlayer BLS.
  • Tangle Test Harness: Framework for managing and running tests.
  • TangleTestHarness API: API documentation for interacting with the test harness.

Example Test Workflows

  • Single-Node Test: set up the harness, deploy a blueprint, then submit and verify a job (see the full Rust example under "Writing Tests with the Framework" below).
  • Multi-Node Test: set up multiple nodes, then execute the job across them (see the multi-node example below).

CI Integration

  • Integrate the testing framework with continuous integration systems to automate testing.

Conclusion

  • The testing framework provides a comprehensive suite for testing applications, ensuring reliability and performance.

Architecture of the Testing Framework

This document details the comprehensive testing framework for the Tangle Blueprint project. The framework provides utilities for testing blueprints across various blockchain environments with a focus on Tangle Network and Eigenlayer. It enables developers to validate blueprint implementations through controlled test environments that simulate real-world blockchain interactions.

Testing Architecture Overview

The Blueprint testing framework follows a modular design pattern, allowing developers to test their blueprints across different blockchain environments with a consistent API.

Core Components

Testing Framework Overview

Core Testing Components

The testing framework is built around several core components that provide a consistent way to test blueprints across different blockchain environments:

  • TestEnv trait
  • TestRunner
  • TangleTestEnv
  • EigenlayerBLSTestEnv
  • TangleTestHarness
  • MultiNodeTestEnv
  • NodeHandle


TestRunner

The TestRunner is responsible for executing jobs in a controlled environment. It takes a router with configured jobs and executes them with the provided context.

pub struct TestRunner<Ctx> {
    router: Option<Router<Ctx>>,
    job_index: usize,
    pub builder: Option<BlueprintRunnerBuilder<Pending<()>>>,
    _phantom: core::marker::PhantomData<Ctx>,
}

The TestRunner provides methods to:

  • Add jobs to be executed
  • Add background services
  • Run the jobs with a given context

NodeHandle

Each node in the MultiNodeTestEnv is represented by a NodeHandle that provides control over individual nodes:

pub struct NodeHandle<Ctx> {
    pub node_id: usize,
    pub addr: Multiaddr,
    pub port: u16,
    pub client: TangleClient,
    pub signer: TanglePairSigner<sp_core::sr25519::Pair>,
    state: Arc<RwLock<NodeState>>,
    command_tx: mpsc::Sender<NodeCommand>,
    pub test_env: Arc<RwLock<TangleTestEnv<Ctx>>>,
}

Testing Transactions

The testing framework provides utilities for executing common transactions on the Tangle network:

// Transaction Functions
deploy_new_mbsm_revision()
create_blueprint()
join_operators()
register_for_blueprint()
submit_job()
request_service()
approve_service()
wait_for_completion_of_tangle_job()


Testing Workflows and Best Practices

Example Test Workflow

A typical test workflow using the TangleTestHarness involves:

  1. Setting up the test environment
  2. Deploying a blueprint
  3. Setting up operators and services
  4. Submitting a job
  5. Waiting for job execution
  6. Verifying the results

// Initialize the test harness
let harness = TangleTestHarness::<MyContext>::setup(temp_dir).await?;

// Deploy the blueprint
let blueprint_id = harness.deploy_blueprint().await?;

// Setup services with operators
let (mut test_env, service_id, _) = harness.setup_services::<1>(false).await?;

// Initialize the test environment
test_env.initialize().await?;

// Add a job to be executed
test_env.add_job(my_job).await;

// Start the test environment with context
test_env.start(my_context).await?;

// Submit a job to the service
let job = harness.submit_job(service_id, job_id, inputs).await?;

// Wait for job execution
let results = harness.wait_for_job_execution(service_id, job).await?;

// Verify the job results
harness.verify_job(&results, expected_outputs);

CI Integration

The testing framework is designed for CI environments. The CI configuration in .github/workflows/ci.yml defines the test execution process:

  1. Checks code formatting
  2. Runs linting checks
  3. Generates a matrix of crates to test
  4. Runs tests for each crate

Special handling for some crates in the CI configuration:

SERIAL_CRATES=("blueprint-tangle-testing-utils" "blueprint-client-evm" "blueprint-tangle-extra" "blueprint-networking" "cargo-tangle")

Transaction Functions

These functions facilitate:

  • Deploying the Master Blueprint Service Manager (MBSM)
  • Creating new blueprints
  • Joining the operator set
  • Registering for blueprints
  • Submitting jobs to services
  • Requesting and approving services
  • Waiting for job completion

TestEnv Trait

The TestEnv trait defines the common interface that all test environments must implement, enabling consistent testing patterns regardless of the underlying blockchain protocol.

«trait» TestEnv
+type Config
+type Context
+new(config, env) : Result<Self, Error>
+add_job(job)
+add_background_service(service)
+get_gadget_config() : BlueprintEnvironment
+run_runner(context) : Future<Result<(), Error>>

TangleTestEnv

+runner: Option<TestRunner<Ctx>>
+config: TangleConfig
+env: BlueprintEnvironment
+runner_handle: Mutex<Option<JoinHandle>>
+update_networking_config(bootnodes, port)
+set_tangle_producer_consumer()

EigenlayerBLSTestEnv

+runner: Option<TestRunner<Ctx>>
+config: EigenlayerBLSConfig
+env: BlueprintEnvironment
+runner_handle: Mutex<Option<JoinHandle>>

TangleTestEnv

TangleTestEnv is the test environment for Tangle Network blueprints. It implements the TestEnv trait and provides Tangle-specific functionality.

pub struct TangleTestEnv<Ctx> {
    pub runner: Option<TestRunner<Ctx>>,
    pub config: TangleConfig,
    pub env: BlueprintEnvironment,
    pub runner_handle: Mutex<Option<JoinHandle<Result<(), Error>>>>,
}

The TangleTestEnv includes methods to:

  • Update networking configuration
  • Set up Tangle producer and consumer
  • Run the test runner with a given context

EigenlayerBLSTestEnv

EigenlayerBLSTestEnv is the test environment for Eigenlayer BLS blueprints. It implements the TestEnv trait and provides Eigenlayer-specific functionality.

pub struct EigenlayerBLSTestEnv<Ctx> {
    pub runner: Option<TestRunner<Ctx>>,
    pub config: EigenlayerBLSConfig,
    pub env: BlueprintEnvironment,
    pub runner_handle: Mutex<Option<JoinHandle<Result<(), Error>>>>,
}

Tangle Test Harness

The TangleTestHarness provides a comprehensive set of utilities for testing Tangle Network blueprints. It handles the setup of test nodes, deployment of blueprints, registration of operators, creation of services, and execution of jobs.

A typical test flow calls setup(), deploy_blueprint(), setup_services(), submit_job(), wait_for_job_execution(), and verify_job(), backed by local infrastructure: a Tangle node and the Master Blueprint Service Manager.

TangleTestHarness API

The TangleTestHarness provides a rich API for testing Tangle Network blueprints:

| Method | Description |
|---|---|
| setup | Sets up a test environment with a local Tangle node |
| deploy_blueprint | Deploys a blueprint to the Tangle network |
| setup_services | Sets up operators and services for testing |
| submit_job | Submits a job to be executed by the service |
| wait_for_job_execution | Waits for job execution to complete |
| verify_job | Verifies that job results match expected outputs |

Multi-Node Testing

The MultiNodeTestEnv allows testing with multiple operators, enabling complex scenarios such as consensus protocols and distributed systems.

Command Flow

Each NodeHandle accepts node-level commands (add_job(), add_background_service(), start_runner(), shutdown()), while the MultiNodeTestEnv accepts environment-level commands (initialize(), add_job(), start(), start_with_contexts(), add_node(), remove_node(), shutdown()). Commands flow from the environment (EnvironmentCommand) down to the individual nodes (NodeCommand).

MultiNodeTestEnv API

The MultiNodeTestEnv provides methods for managing multiple test nodes:

Method | Description
---|---
new | Creates a new multi-node test environment
initialize | Initializes the environment with the specified number of nodes
add_job | Adds a job to all nodes
start | Starts all nodes with the same context
start_with_contexts | Starts nodes with different contexts
add_node | Adds a new node to the environment
remove_node | Removes a node from the environment
shutdown | Shuts down the environment

Example: Multi-Node Test

#[tokio::test]
async fn test_multi_node() -> Result<(), Error> {
    // Set up the test harness
    let temp_dir = TempDir::new()?;
    let harness = TangleTestHarness::<MyContext>::setup(temp_dir).await?;

    // Deploy the blueprint
    let blueprint_id = harness.deploy_blueprint().await?;

    // Set up services with multiple nodes
    let (mut test_env, service_id, _) = harness.setup_services::<3>(false).await?;
    test_env.initialize().await?;

    // Add a job to test
    test_env.add_job(my_test_job).await;

    // Start the test environment
    test_env.start(my_context).await?;

    // Submit a job
    let job = harness.submit_job(service_id, 0, inputs).await?;

    // Wait for execution and verify results
    let results = harness.wait_for_job_execution(service_id, job).await?;
    harness.verify_job(&results, expected_outputs);

    Ok(())
}

Managing Jobs in Multi-Node Environments

// Add a job to all nodes
test_env.add_job(my_test_job).await;

// Create node-specific contexts
let node_handles = test_env.node_handles().await;
let contexts = node_handles.iter().enumerate().map(|(idx, _)| {
    MyContext { node_index: idx }
}).collect::<Vec<_>>();

// Start the test environment with node-specific contexts
test_env.start_with_contexts(contexts).await?;

// Submit a job
let job = harness.submit_job(service_id, 0, inputs).await?;

// Wait for execution and verify results
let results = harness.wait_for_job_execution(service_id, job).await?;
harness.verify_job(&results, expected_outputs);



Writing Tests with the Framework

  1. Set up the test environment using TangleTestHarness or the appropriate test environment for your protocol.
  2. Deploy your blueprint using deploy_blueprint().
  3. Set up operators and services using setup_services().
  4. Define and add jobs to be tested.
  5. Run your test environment.
  6. Submit jobs and verify results.

Example: Single-Node Test

#[tokio::test]
async fn test_my_blueprint() -> Result<(), Error> {
    // Set up the test harness and deploy the blueprint
    let temp_dir = TempDir::new()?;
    let harness = TangleTestHarness::<MyContext>::setup(temp_dir).await?;
    let blueprint_id = harness.deploy_blueprint().await?;

    // Set up a single operator/service, add the job under test, and start
    let (mut test_env, service_id, _) = harness.setup_services::<1>(false).await?;
    test_env.initialize().await?;
    test_env.add_job(my_test_job).await;
    test_env.start(my_context).await?;

    // Submit a job, wait for execution, and verify the results
    let job = harness.submit_job(service_id, 0, inputs).await?;
    let results = harness.wait_for_job_execution(service_id, job).await?;
    harness.verify_job(&results, expected_outputs);
    Ok(())
}

Conclusion

The Blueprint testing framework provides a comprehensive set of tools for testing blueprints across different blockchain environments. It simplifies the complexities of setting up test nodes, deploying blueprints, and interacting with blockchain networks, allowing developers to focus on writing tests for their blueprint functionality. Key components include:

  • TangleTestHarness
  • MultiNodeTestEnv
  • Protocol-specific test environments

These tools enable developers to create robust test suites that validate their blueprint implementations in various scenarios.


Core Components of Tangle Blueprints

Blueprint Sources and Templates

When creating a new blueprint for deployment, the system supports different template sources:

The cargo tangle blueprint create command can pull blueprint templates from the default templates, a GitHub repository, or a custom template source, and supports three blueprint types: Tangle Blueprint, Eigenlayer BLS, and Eigenlayer ECDSA.

Deployment Options Overview

The cargo-tangle CLI provides various deployment targets for blueprints, managed through the blueprint deploy subcommand. The two primary deployment targets are:

Deployment Targets

  • Tangle Network Deployment

    • Command:
      cargo tangle blueprint deploy tangle --http-rpc-url <URL> --ws-rpc-url <WS_URL> [OPTIONS]
      
  • Eigenlayer Deployment

    • Command:
      cargo tangle blueprint deploy eigenlayer --rpc-url <URL> [OPTIONS]
      

Eigenlayer Options

Option | Description | Default
---|---|---
--rpc-url | HTTP RPC endpoint URL | Required (no default)
--contracts-path | Path to contracts directory | None (auto-detected)
--ordered-deployment | Deploy contracts in interactive ordered manner | false
--network | Target network (local/testnet/mainnet) | local
--devnet | Start a local Anvil instance | false
--keystore-path | Path to the keystore containing deployment keys | ./keystore


Using the cargo-tangle CLI for Deployment

Key Options for cargo-tangle CLI

Option | Description | Default
---|---|---
--http-rpc-url | HTTP RPC endpoint URL | https://rpc.tangle.tools
--ws-rpc-url | WebSocket RPC endpoint URL | wss://rpc.tangle.tools
--package | Package to deploy (if workspace has multiple) | None (auto-detected)
--devnet | Start a local Tangle devnet | false
--keystore-path | Path to the keystore containing deployment keys | ./keystore

Local Development with Devnet

To start a local Tangle testnet and deploy your blueprint:

cargo tangle blueprint deploy tangle --devnet

This command creates a keystore with test accounts and deploys your blueprint to the local network.

Additional Options

  • --data-dir: Directory for data storage
  • --bootnodes: Optional bootnodes to connect to
  • --settings-file: Path to the protocol settings file

Deployment Command Relationship

The blueprint deploy subcommand dispatches to the deploy_tangle() function when protocol=tangle and to deploy_eigenlayer() when protocol=eigenlayer; blueprint run similarly dispatches to run_blueprint() or run_eigenlayer_avs(). All of these rely on the CLI's key management.

Key Management in the Blueprint Framework

Deployment Workflow

The typical deployment workflow for both Tangle and Eigenlayer: start deployment → build the project → set up keys (generate new keys or import existing ones, stored in the keystore) → choose a deployment target → deploy to Tangle or Eigenlayer.

Key Management for Deployment

Proper key management is essential for blueprint deployment. The cargo-tangle CLI provides several commands for managing keys:

Key Types and Commands

The Blueprint framework supports multiple key types for different protocols:

Key Type | Description | Use Case  
---|---|---  
Sr25519 | Substrate key type | Tangle Network operations  
Ed25519 | Edwards-curve Digital Signature Algorithm | General cryptographic operations  
Ecdsa | Elliptic Curve Digital Signature Algorithm | Tangle and Ethereum operations  
Bls381 | BLS signatures on BLS12-381 curve | Aggregate signatures  
Bls377 | BLS signatures on BLS12-377 curve | Aggregate signatures  
Bn254 | BLS signatures on BN254 curve | Eigenlayer operations  

Key management commands include:

cargo tangle key generate --key-type <TYPE> [OPTIONS]
cargo tangle key import --key-type <TYPE> --secret <SECRET_KEY> --keystore-path <PATH>
cargo tangle key export --key-type <TYPE> --public <PUBLIC_KEY> --keystore-path <PATH>
cargo tangle key list --keystore-path <PATH>

Keystore Architecture

The Blueprint keystore system manages cryptographic keys securely across different backends:

The cargo-tangle key commands (generate/import/export/list) operate on the supported key types (Sr25519, Ed25519, Ecdsa, and the BLS variants Bn254, Bls381, and Bls377) through the Blueprint Keystore, which supports both file-system and in-memory storage backends.
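
As a toy illustration of the file-system backend's persistence model only (per-key-type directories holding hex-named key files): this is not the blueprint-keystore API, and the hex crate is an assumed dependency.

use std::{fs, path::Path};

// Store raw key material under <root>/<key_type>/<hex(public_key)>
fn store_key(root: &Path, key_type: &str, public: &[u8], secret: &[u8]) -> std::io::Result<()> {
    let dir = root.join(key_type);
    fs::create_dir_all(&dir)?;
    fs::write(dir.join(hex::encode(public)), secret)
}

// List the hex-encoded public keys stored for a key type
fn list_keys(root: &Path, key_type: &str) -> std::io::Result<Vec<String>> {
    fs::read_dir(root.join(key_type))?
        .map(|entry| Ok(entry?.file_name().to_string_lossy().into_owned()))
        .collect()
}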

Environment Configuration

Environment variables or configuration files can be used to define protocol-specific settings for deployment:

Tangle Configuration

For Tangle deployments, create a settings.env file with:

BLUEPRINT_ID=<your_blueprint_id>
SERVICE_ID=<optional_service_id>


Eigenlayer Configuration

For Eigenlayer deployments, create a settings.env file with contract addresses:

ALLOCATION_MANAGER_ADDRESS=0x...
REGISTRY_COORDINATOR_ADDRESS=0x...
OPERATOR_STATE_RETRIEVER_ADDRESS=0x...
DELEGATION_MANAGER_ADDRESS=0x...
SERVICE_MANAGER_ADDRESS=0x...
STAKE_REGISTRY_ADDRESS=0x...
STRATEGY_MANAGER_ADDRESS=0x...
STRATEGY_ADDRESS=0x...
AVS_DIRECTORY_ADDRESS=0x...
REWARDS_COORDINATOR_ADDRESS=0x...
PERMISSION_CONTROLLER_ADDRESS=0x...

Running Deployed Blueprints

After deployment, you can run your blueprint on the target network:

cargo tangle blueprint run --protocol <PROTOCOL> --rpc-url <URL> [OPTIONS]

Where:

  • <PROTOCOL> is either tangle or eigenlayer
  • <URL> is the RPC endpoint URL

Best Practices for Deployment

  1. Key Management:

    • Generate and securely store keys before deployment.
    • Use different keys for development and production environments.
    • Regularly back up your keystore.
  2. Local Testing:

    • Always test deployments with the --devnet flag before deploying to production networks.
    • Use the local network option for Eigenlayer testing.
  3. Network Selection:

    • Use testnet deployments before moving to mainnet.
    • Configure RPC endpoints appropriately for each network.
  4. Configuration Management:

    • Store network-specific configurations in separate environment files.
    • Include protocol-specific settings in version control with example values.
  5. Continuous Deployment:

    • Implement CI/CD pipelines for automated testing and deployment.
    • Include deployment verification steps in your workflow.

Troubleshooting Deployment Issues

Common deployment issues and their solutions:

Issue | Possible Cause | Solution
---|---|---
Key not found | Missing or incorrect keystore path | Check keystore path and generate/import required keys
Connection error | Incorrect RPC URL | Verify URL and network connectivity
Contract not found (Eigenlayer) | Missing or incorrect contract addresses | Set correct addresses in settings.env file
Build failure | Missing dependencies | Run cargo build manually to see detailed errors
Unknown service ID (Tangle) | Service not properly registered | Use cargo tangle blueprint register to register the service

If you encounter issues during deployment, use the verbose output option or check the logs for more detailed error information.


CLI Command Reference

Blueprint CLI Commands Overview

The blueprint commands are part of the cargo-tangle CLI, allowing management of blueprints across various protocol environments, including Tangle Network and Eigenlayer.

Command Format

To use the Blueprint commands, follow this structure:

cargo tangle blueprint <COMMAND> [OPTIONS]

or the short alias:

cargo tangle bp <COMMAND> [OPTIONS]

For key management commands, refer to Key Management Commands.

Command Syntax Overview

Command Options

  • --protocol, -p <PROTOCOL>: Protocol to run (eigenlayer or tangle)
  • --rpc-url, -u <URL>: HTTP RPC endpoint URL (default: http://127.0.0.1:9944)
  • --keystore-path, -k <PATH>: Keystore path (defaults to ./keystore)
  • --binary-path, -b <PATH>: Path to the AVS binary (built if not provided)
  • --network, -w <NETWORK>: Network to connect to (local, testnet, mainnet, default: local)
  • --data-dir, -d <PATH>: Data directory path (defaults to ./data)
  • --bootnodes, -n <BOOTNODES>: Optional bootnodes to connect to
  • --settings-file, -f <FILE>: Path to protocol settings env file (default: ./settings.env)
  • --podman-host <URL>: Podman host for containerized blueprints


Creating Blueprints

Blueprint Command Structure

  • cargo tangle
  • Commands:
    • blueprint (bp)
    • create (c)
    • deploy (d)
    • run (r)
    • list-requests (ls)
    • list-blueprints (lb)
    • register (reg)
    • accept-request (accept)
    • reject-request (reject)
    • request-service (req)
    • submit-job (submit)
    • deploy-mbsm (mbsm)

Blueprint Lifecycle Commands

These commands help you create, deploy, and run blueprints on different networks.

Create

Creates a new blueprint project with the specified name and type.

cargo tangle blueprint create --name <NAME> [--source <SOURCE>] [--blueprint-type <TYPE>]

Options:

  • --name, -n <NAME>: Name of the blueprint (required)
  • --source <SOURCE>: Optional template source (defaults to official template)
  • --blueprint-type <TYPE>: Type of blueprint to create (Tangle, EigenlayerBLS, EigenlayerECDSA)


Running Blueprints

Run

Runs a blueprint on the specified protocol.

cargo tangle blueprint run [OPTIONS]

Examples

  1. Run a blueprint (as operator):

cargo tangle blueprint run --protocol tangle

  2. Submit a job (as user):

cargo tangle blueprint submit --blueprint-id 0 --service-id 0 --job 1

Service Request Commands

Accept Request

Accepts a service request.

cargo tangle blueprint accept-request [OPTIONS]

Options:

  • --ws-rpc-url <URL>: WebSocket RPC URL (default: ws://127.0.0.1:9944)
  • --blueprint-id <ID>: Blueprint ID to request
  • --min-exposure-percent <PERCENT>: Minimum exposure percentage (default: 50)
  • --max-exposure-percent <PERCENT>: Maximum exposure percentage (default: 80)
  • --target-operators <OPERATORS>: Target operators to request
  • --value <VALUE>: Value to request
  • --keystore-uri <URI>: Keystore URI to use (default: ./keystore)


Listing Blueprints and Requests

List Blueprints

Lists all blueprints on the target Tangle network.

cargo tangle blueprint list-blueprints [--ws-rpc-url <URL>]

Options:

  • --ws-rpc-url <URL>: WebSocket RPC URL (default: ws://127.0.0.1:9944)

List Requests

Lists all service requests for a Tangle blueprint.

cargo tangle blueprint list-requests [--ws-rpc-url <URL>]

Options:

  • --ws-rpc-url <URL>: WebSocket RPC URL (default: ws://127.0.0.1:9944)

Deploy MBSM

Deploys a Master Blueprint Service Manager (MBSM) contract to the Tangle Network.

cargo tangle blueprint deploy-mbsm [OPTIONS]

Options:

  • --http-rpc-url <URL>: HTTP RPC URL (default: http://127.0.0.1:9944)
  • --force, -f: Force deployment even if the contract is already deployed

Deploying Blueprints

Deploy

Deploys a blueprint to the target network (Tangle or Eigenlayer).

cargo tangle blueprint deploy <TARGET> [OPTIONS]

Tangle Deployment

cargo tangle blueprint deploy tangle [OPTIONS]

Options:

  • --http-rpc-url <URL>: HTTP RPC URL (default: https://rpc.tangle.tools)
  • --ws-rpc-url <URL>: WebSocket RPC URL (default: wss://rpc.tangle.tools)
  • --package, -p <PACKAGE>: Package to deploy (if workspace has multiple packages)
  • --devnet: Start a local devnet using a Tangle test node
  • --keystore-path, -k <PATH>: Path to the keystore (default: ./keystore)

Eigenlayer Deployment

cargo tangle blueprint deploy eigenlayer [OPTIONS]

Options:

  • --rpc-url <URL>: RPC URL for Eigenlayer deployment
  • --contracts-path <PATH>: Path to the contracts
  • --ordered-deployment: Deploy contracts in an interactive ordered manner
  • --network, -w <NETWORK>: Network to deploy to (local, testnet, mainnet, default: local)
  • --devnet: Start a local devnet using Anvil (only valid with network=local)
  • --keystore-path, -k <PATH>: Path to the keystore (default: ./keystore)

Service Management Commands

These commands are used to manage blueprint services on the Tangle Network.

Blueprint Service Lifecycle

  • Deploy Blueprint
  • Register as Operator
  • Request Service
  • Accept Request
  • Reject Request
  • Submit Job

Register

Registers an account as an operator for a blueprint.

cargo tangle blueprint register --ws-rpc-url <URL> --blueprint-id <ID> --keystore-uri <URI>

Options:

  • --ws-rpc-url <URL>: WebSocket RPC URL (default: ws://127.0.0.1:9944)
  • --blueprint-id <ID>: Blueprint ID to register for
  • --keystore-uri <URI>: Keystore URI to use (default: ./keystore)

Request Service

Requests a blueprint service from operators.

cargo tangle blueprint request-service [OPTIONS]

Job Management Commands

Reject Request

Rejects a service request.

cargo tangle blueprint reject-request [OPTIONS]

Options:

  • --ws-rpc-url <URL>: WebSocket RPC URL (default: ws://127.0.0.1:9944)
  • --keystore-uri <URI>: Keystore URI to use (default: ./keystore)
  • --request-id <ID>: Request ID to respond to

Submit Job

Submits a job to a blueprint service.

cargo tangle blueprint submit [OPTIONS]

Options:

  • --ws-rpc-url <URL>: WebSocket RPC URL (default: ws://127.0.0.1:9944)
  • --service-id <ID>: Service ID to submit the job to
  • --blueprint-id <ID>: Blueprint ID to submit the job to
  • --keystore-uri <URI>: Keystore URI to use
  • --job <JOB_ID>: Job ID to submit
  • --params-file <FILE>: Optional path to JSON file containing job parameters
  • --watcher: Whether to wait for the job to complete

Job Submission Flow: the client submits a job ID plus parameters to the Blueprint Router, the router matches the route, the matched Job Handler processes the job, and the Job Result is returned to the client (a handler sketch follows).
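
On the blueprint side of this flow, the route match is a Router lookup keyed by job ID. A minimal sketch, assuming the Router::new().route(job_id, handler) pattern and Tangle extractor types used by the SDK examples; the import paths and extractor names here are assumptions:

use blueprint_sdk::Router;
use blueprint_sdk::tangle::extract::{TangleArg, TangleResult};

const DOUBLE_JOB_ID: u8 = 1;

// Handler invoked when a submitted job's ID matches the route
async fn double(TangleArg(x): TangleArg<u64>) -> TangleResult<u64> {
    TangleResult(x * 2) // the result is returned to the submitting client
}

let router = Router::new().route(DOUBLE_JOB_ID, double);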

Informational Commands

These commands, list-blueprints and list-requests (described above), provide information about blueprints and service requests.

Environment Variables and Configuration

Environment Variables

The following environment variables can be used with blueprint commands:

Variable | Description | Default
---|---|---
WS_RPC_URL | WebSocket RPC URL for Tangle | ws://127.0.0.1:9944
HTTP_RPC_URL | HTTP RPC URL for Tangle | http://127.0.0.1:9944
KEYSTORE_URI | Path to keystore | ./keystore
PODMAN_HOST | Podman host for containerized blueprints | unix:///var/run/docker.sock
NAME | Name for blueprint creation | -
BLUEPRINT_ID | Blueprint ID for settings file | -
SERVICE_ID | Service ID for settings file | -


Protocol Settings

Protocol-specific settings files can be provided when running blueprints. The format depends on the protocol:

Tangle Protocol Settings

BLUEPRINT_ID=0
SERVICE_ID=0

Eigenlayer Protocol Settings

ALLOCATION_MANAGER_ADDRESS=0x...
REGISTRY_COORDINATOR_ADDRESS=0x...
OPERATOR_STATE_RETRIEVER_ADDRESS=0x...
DELEGATION_MANAGER_ADDRESS=0x...
SERVICE_MANAGER_ADDRESS=0x...
STAKE_REGISTRY_ADDRESS=0x...
STRATEGY_MANAGER_ADDRESS=0x...
STRATEGY_ADDRESS=0x...
AVS_DIRECTORY_ADDRESS=0x...
REWARDS_COORDINATOR_ADDRESS=0x...
PERMISSION_CONTROLLER_ADDRESS=0x...

Blueprint Command Relationships

The cargo-tangle CLI groups the blueprint commands into four areas: creation & deployment (create, deploy, run), service management (register, request-service, accept-request, reject-request), job management (submit-job), and information (list-blueprints, list-requests).

Common Blueprint Development Workflow

A typical workflow for developing and deploying a blueprint follows these steps:

Create Blueprint → Build Blueprint → Test Blueprint → Deploy Blueprint → Run Blueprint → Register as Operator → Request Service → Accept Request → Submit Job

  1. Create blueprint:

cargo tangle blueprint create --name my_blueprint

  2. Build blueprint:

cd my_blueprint
cargo build

  3. Deploy blueprint:

cargo tangle blueprint deploy tangle --ws-rpc-url wss://rpc.tangle.tools

  4. Register as operator (optional):

cargo tangle blueprint register --blueprint-id 0

  5. Request service (as user):

cargo tangle blueprint request-service --blueprint-id 0 --value 100

  6. Accept request (as operator):

cargo tangle blueprint accept-request --request-id 0


CI/CD Pipeline Overview

The Tangle Blueprint framework uses GitHub Actions for continuous integration, automatically running checks on pull requests and commits to the main branch.

On each pull request or push, GitHub Actions fans out into parallel jobs: a formatting check, code linting, test matrix generation, and package tests. If all jobs pass, the change is ready for merge; otherwise it requires fixes.

GitHub Actions Workflow

The CI workflow is defined in the .github/workflows/ci.yml file and consists of several distinct jobs that run in parallel.

Workflow Triggers

The CI pipeline is triggered by:

  • Pull requests to the main branch
  • Pushes to the main branch
  • Manual workflow dispatch
Each trigger starts four jobs: formatting, linting, generate-matrix, and testing (a matrix job that runs after the matrix is generated).

Job: Code Formatting

The formatting job ensures consistent code style across the codebase using Rust's nightly formatter.

  • Runner : Ubuntu latest
  • Rust Toolchain : Nightly with rustfmt
  • Command : cargo +nightly fmt -- --check

Development Workflow Integration

The CI/CD pipeline integrates with the broader development workflow for the Tangle Blueprint framework.

Developers run local formatting, linting, and testing before pushing changes; pushed changes go through the CI pipeline and code review, failed checks are fixed locally, and approved changes merge to main.

Future Improvements

  • Fix Tangle Node Testing
  • Integrate Doc Tests with nextest
  • Enhanced Build Caching

Job Configurations

Job: Matrix Generation

This job dynamically generates a test matrix by analyzing the cargo workspace to identify all packages.

  • Runner: Ubuntu latest
  • Command: Uses cargo metadata and jq to extract package names
  • Output: JSON array of package names used by the testing job
Matrix generation steps: checkout code → run cargo metadata → extract package names with jq → output the matrix for the test job.

Job: Testing

The testing job runs unit and integration tests for each package in the workspace, using the matrix generated by the previous job.

  • Runner: Ubuntu latest
  • Dependencies: Foundry, Rust stable, nextest
  • Matrix: Runs separate jobs for each package
  • Testing Strategy:
    • Determines whether to run tests in parallel or serially based on the package
    • Uses nextest for faster test execution
    • Also runs doc tests with the standard test runner
  • Timeout: 30 minutes per package
For each matrix entry, the job checks out the code, installs dependencies, determines the test profile, then runs nextest followed by doc tests. Most packages use the parallel ci profile; selected packages (the SERIAL_CRATES listed earlier) use the serial profile.

Development Environment

The Tangle Blueprint framework uses Nix Flakes to create a reproducible development environment that closely mirrors the CI environment.

Nix Development Shell

The development environment is defined in flake.nix and includes all necessary tools and dependencies for building, testing, and linting the codebase.

Specifically, the shell provides:

  • Build dependencies: pkg-config, clang/libclang, openssl, gmp, protobuf, and the mold linker (Linux)
  • Development tools: the Rust toolchain, Foundry, rust-analyzer, cargo-nextest, cargo-expand, cargo-dist, Node.js 22, and Yarn


Key Development Tools

  1. cargo-nextest: A modern test runner that provides faster test execution by running tests in parallel.
  2. cargo-expand: A tool for debugging and understanding macros by showing expanded code.
  3. cargo-dist: A tool for creating distributable packages.
  4. Foundry: A development environment for Ethereum smart contracts.
  5. rust-analyzer: A language server for Rust that provides IDE features.

Local CI Validation

Developers can run the same checks locally that are performed in CI to validate changes before pushing.

Code Formatting

cargo +nightly fmt -- --check

Linting and Testing Practices

Job: Code Linting

  • Runner: Ubuntu latest
  • Dependencies: Foundry, Rust stable, protobuf, libgmp
  • Command: cargo clippy --tests --examples -- -D warnings
  • Timeout: 120 minutes

Linting Command

cargo clippy --tests --examples -- -D warnings

Testing Commands

cargo nextest run --package <package-name>
cargo test --package <package-name> --doc

Limitations and Future Improvements

  1. Tangle Node Testing: The test job for the "incredible-squaring-tangle" example is currently disabled due to issues with the Tangle node in CI.
  2. Documentation Tests: Currently using the standard test runner for documentation tests as nextest doesn't support doc tests yet.
  3. Build Caching: Opportunities for additional caching to speed up CI runs.

Conclusion

The CI/CD pipeline for the Tangle Blueprint framework provides automated validation for code changes, ensuring consistent quality across the codebase. The combination of formatting checks, linting, and comprehensive testing helps maintain high code quality standards and prevents regressions as the codebase evolves. Developers can use the Nix development environment to run the same checks locally, creating a seamless workflow between local development and CI validation.

Advanced Topics

Networking Extensions

Blueprint's networking layer can be extended with specialized protocols to support advanced use cases like signature aggregation and round-based consensus.

Signature Aggregation

Signature aggregation allows multiple signatures to be combined into a single, verifiable signature, which is essential for:

  • Threshold signature schemes
  • Multi-signature wallets
  • Consensus mechanisms requiring signature aggregation
  • Reducing on-chain verification costs

Advanced Storage Solutions

Blueprint provides extensible storage mechanisms through the blueprint-stores module.

Local Database Store

The blueprint-store-local-database provides persistent local storage for blueprints.

Data stored through blueprint-stores is persisted as JSON-serialized files on the local filesystem, laid out per blueprint.

Aggregated Signature Gossip Extension

The Aggregated Signature Gossip Extension provides a specialized protocol for efficiently aggregating cryptographic signatures across network participants, particularly useful for threshold signature schemes and BLS signature aggregation.

Key Features

  • Participant management for tracking network peers involved in aggregation
  • Efficient signature collection and verification
  • Threshold detection for signature completion
  • Gossip-based signature propagation over libp2p
  • Support for BLS and BN254 signature schemes

Implementation

To implement the extension, enable the "aggregation" feature in the crypto crates:

blueprint-crypto = { features = ["aggregation"] }
blueprint-crypto-bls = { features = ["aggregation"] }
blueprint-crypto-bn254 = { features = ["aggregation"] }

Components

  • Gossip Protocol
  • ParticipantManager
  • SignatureAggregator
  • GossipHandler
  • Participants Registry
  • Signature Collection
  • Network Service
  • Signature Topic
  • P2P Network
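
For instance, the threshold-detection step performed by the SignatureAggregator can be pictured as follows; the type and method names here are conceptual, not the extension's actual API:

use std::collections::BTreeMap;

struct SignatureAggregator {
    threshold: usize,
    // participant id -> signature bytes received over gossip
    collected: BTreeMap<u16, Vec<u8>>,
}

impl SignatureAggregator {
    // Record a verified signature; returns true once the threshold is met
    fn add_signature(&mut self, participant: u16, signature: Vec<u8>) -> bool {
        self.collected.insert(participant, signature);
        self.collected.len() >= self.threshold
    }
}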

Round-Based Protocol Extension

The Round-Based Protocol Extension provides infrastructure for implementing round-based consensus protocols and distributed algorithms. It integrates with the round-based crate to simplify development of multi-round protocols.

Key Features:

  • State management for multi-round protocols
  • Message serialization and routing
  • Round advancement and timeout handling
  • Integration with Blueprint's networking layer

Components:

  • Round Manager
  • Message Handler
  • Protocol State
  • Protocol Rounds
  • Round Messages
  • Network Service
  • User-Defined Protocol
  • DKG Protocol
  • Threshold Signing
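
As a conceptual illustration of the state management involved (the extension itself builds on the round-based crate; these types are illustrative only):

use serde::{Deserialize, Serialize};

// Messages exchanged in each round, serialized for the network layer
#[derive(Serialize, Deserialize)]
enum RoundMsg {
    Round1 { commitment: Vec<u8> },
    Round2 { share: Vec<u8> },
}

// Protocol state advanced by the round manager as messages arrive
enum State {
    AwaitingCommitments,
    AwaitingShares,
    Done,
}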


Macro System

The Blueprint framework provides a powerful macro system to simplify development and reduce boilerplate code.

Blueprint Macros

The blueprint-macros crate provides procedural macros for enhancing Blueprint development. Key macros include:

  • job: Defines job handlers with automatic type conversion and error handling
  • job_id: Creates type-safe job identifiers
  • blueprint: Sets up the blueprint structure and lifecycle hooks
  • context: Defines context extensions for dependency injection
Each macro annotates user-written code and generates the corresponding implementation: job produces job implementations that integrate with the Router, blueprint produces the blueprint implementation used by the Blueprint Runner, job_id produces type-safe identifiers used by job definitions, and context produces context extensions.

Context Derive Macros

The blueprint-context-derive crate provides specialized derive macros for context extensions:

  • Automatic implementation of context access traits
  • Protocol-specific context extensions
  • Compile-time verification of context requirements
These derives take a user-defined struct and implement the corresponding context access traits: the base Context derive plus optional EVM-, Tangle-, and network-specific derives. A sketch follows.
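
The derive and attribute names in this sketch are assumptions for illustration; consult blueprint-context-derive for the exact exports:

// Hypothetical derives: base context access plus Tangle-specific access
#[derive(Clone, Context, TangleContext)]
struct MyContext {
    #[config] // assumed attribute marking the embedded environment
    env: BlueprintEnvironment,
    api_token: String, // user-defined state available to job handlers
}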

Advanced Cryptography

The Blueprint framework supports a wide range of cryptographic primitives through its modular crypto architecture.

Cryptographic Schemes

The blueprint-crypto package serves as a metapackage that integrates various cryptographic implementations:

Scheme | Crate | Features | Use Cases
---|---|---|---
K256 | blueprint-crypto-k256 | EVM signatures | Ethereum compatibility
SR25519 | blueprint-crypto-sr25519 | Schnorrkel | Tangle compatibility
ED25519 | blueprint-crypto-ed25519 | Zebra | General purpose
BLS | blueprint-crypto-bls | Signature aggregation | Threshold signatures
BN254 | blueprint-crypto-bn254 | Pairing-based crypto | Zero-knowledge proofs

This modular structure allows developers to include only the cryptographic primitives required for their specific use case.

Signature Aggregation

A key advanced feature is signature aggregation, enabled through the aggregation feature:

[features]
aggregation = [
    "blueprint-crypto-sp-core/aggregation",
    "blueprint-crypto-bls/aggregation",
    "blueprint-crypto-bn254/aggregation",
]

Local Database Storage Solutions

Key Features of the Local Database Store

  • Persistent storage between blueprint restarts
  • JSON serialization for data storage
  • Filesystem-based storage solution
  • Simple API for data storage and retrieval
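
A toy sketch of this persistence model (serde_json is an assumed dependency); the actual LocalDatabase API may differ:

use std::{collections::HashMap, fs, path::Path};

// Persist the store as JSON so data survives blueprint restarts
fn save(path: &Path, data: &HashMap<String, u64>) -> std::io::Result<()> {
    let json = serde_json::to_string_pretty(data).expect("map is serializable");
    fs::write(path, json)
}

// Load the store, falling back to an empty map on first run
fn load(path: &Path) -> HashMap<String, u64> {
    fs::read_to_string(path)
        .ok()
        .and_then(|json| serde_json::from_str(&json).ok())
        .unwrap_or_default()
}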

Additional Job Producers

Cron Jobs

The blueprint-producers-extra crate provides a cron job producer that enables scheduled execution of jobs:

Enabling the cron feature of blueprint-producers-extra provides the Cron Job Producer, which builds on tokio-cron-scheduler (together with chrono, running on tokio) and produces Job Calls that are processed by the Router and executed by the registered Job Handlers.

Cron Job Scheduling

To use the cron job producer, enable the cron feature:

blueprint-producers-extra = { features = ["cron"] }

The cron job producer allows:

  • Scheduling jobs using cron expressions
  • Recurring job execution
  • Time-based automation within blueprints
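
Since the producer builds on tokio-cron-scheduler, the underlying mechanics can be sketched with that crate directly; this shows the scheduler's own API, not the Cron Job Producer's:

use tokio_cron_scheduler::{Job, JobScheduler};

let scheduler = JobScheduler::new().await?;

// Fire every 10 seconds (six-field cron expression: sec min hour day month weekday)
scheduler
    .add(Job::new_async("1/10 * * * * *", |_id, _lock| {
        Box::pin(async move {
            // In the real producer this would emit a job call into the Router
            println!("scheduled tick");
        })
    })?)
    .await?;

scheduler.start().await?;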

EVM Extensions

The blueprint-evm-extra package provides advanced utilities for working with EVM-compatible blockchains.

Key features include:

  • Enhanced EVM client capabilities
  • Pubsub functionality for blockchain events
  • Advanced utilities for smart contract interaction
  • Transaction management tools
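
As an example of the pubsub capability, subscribing to new block headers over WebSocket can be sketched with the alloy provider stack that Blueprint's EVM tooling builds on; exact types vary across alloy versions:

use alloy::providers::{Provider, ProviderBuilder, WsConnect};
use futures::StreamExt;

let provider = ProviderBuilder::new()
    .on_ws(WsConnect::new("ws://127.0.0.1:8545"))
    .await?;

// Subscribe to new block headers pushed by the node
let subscription = provider.subscribe_blocks().await?;
let mut stream = subscription.into_stream();

while let Some(header) = stream.next().await {
    println!("new block: {}", header.number);
}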

Testing Utilities

For advanced testing scenarios, Blueprint provides specialized testing utilities.

blueprint-testing-utils
blueprint-core-testing-utils
blueprint-anvil-testing-utils
blueprint-tangle-testing-utils
blueprint-eigenlayer-testing-utils

These testing utilities allow:

  • Protocol-specific test environments
  • Deterministic testing of blockchain interactions
  • Simulated network conditions
  • Integration testing across multiple protocols

Feature Flags:

  • anvil feature
  • tangle feature
  • eigenlayer feature

Custom Protocol Integration

Blueprint's modular architecture allows for integration with custom blockchain protocols beyond the built-in support for Tangle, EVM, and Eigenlayer.

The custom protocol integration process typically involves:

  1. Implementing client interfaces for the new protocol
  2. Adding cryptographic primitives required by the protocol
  3. Extending the networking layer if needed
  4. Creating protocol-specific context extensions
  5. Developing custom job handlers for protocol interactions
These steps integrate with the framework as follows: the client implementation fulfils the Blueprint client traits and registers with the Blueprint Runner, the crypto integration implements the crypto traits used by the Key Store, and the network extension enhances the P2P network service. A sketch of the first step follows.
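
The trait below is illustrative of a client interface for a hypothetical protocol, not a Blueprint SDK interface; the async-trait crate is an assumed dependency:

use async_trait::async_trait;

// Hypothetical client abstraction for a custom chain
#[async_trait]
trait CustomProtocolClient {
    type Event;
    type Error;

    // Query chain state needed by job producers
    async fn latest_block(&self) -> Result<u64, Self::Error>;

    // Stream protocol events that the runner turns into job calls
    async fn next_event(&mut self) -> Result<Option<Self::Event>, Self::Error>;
}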