@shawnyeager
Last active February 23, 2026 11:22

BUD-XX: Reciprocal Mirroring for Blossom

Status: Draft — seeking feedback from Blossom and Nostr developers
Date: 2026-02-22

The Problem

Blossom has solid primitives for mirroring — PUT /mirror is lightweight, client-driven, and works well. A client can already upload a blob and mirror it to N servers with N simple HTTP calls. That part scales fine.

What's missing is reciprocity and accountability:

  • No mutual storage agreements. If I want my blobs stored on your server and your blobs stored on mine, there's no protocol for that arrangement. I either run my own servers, pay a commercial CDN, or trust someone's goodwill.
  • No proof of ongoing storage. After mirroring a blob, there's no way to verify the other server still holds it 6 months later. Servers can silently drop content with no consequence.
  • No automated coordination between independent operators. Distributed server implementations exist (one operator, multiple nodes), but there's no mechanism for separate operators to form bilateral storage commitments.

This proposal adds a reciprocity layer on top of existing Blossom primitives: verifiable bilateral agreements, automated PUT /mirror calls triggered by Nostr events, and periodic proof-of-storage challenges.

What This Is NOT

This is not a replacement for client-side mirroring. PUT /mirror works great for users who control their own servers or use commercial providers. This is for a different use case: independent server operators who want mutual redundancy through reciprocal agreements — "I'll store 500MB of yours if you store 500MB of mine."

It's also not a distributed server implementation. Those are single-operator, multi-node setups. This is multi-operator, each running their own Blossom server, forming bilateral peering agreements.

How It Works

The protocol introduces a sidecar daemon that runs alongside any existing Blossom server. The daemon handles all coordination — the Blossom server itself is completely unchanged. No modifications, no new endpoints, no WebSocket support required on the server.

The daemon speaks Nostr (via relays) to coordinate with peers and HTTP to talk to the local Blossom server.

graph TB
    subgraph "Alice's Infrastructure"
        A_Server["Blossom Server<br/>(unchanged, any implementation)"]
        A_Daemon["blossom-cdn daemon<br/>(sidecar)"]
        A_DB["SQLite"]
        A_Daemon -- "HTTP: GET, PUT /mirror" --> A_Server
        A_Daemon --> A_DB
    end

    subgraph "Nostr Relays"
        R1["Relay Pool"]
    end

    subgraph "Bob's Infrastructure"
        B_Server["Blossom Server<br/>(unchanged, any implementation)"]
        B_Daemon["blossom-cdn daemon<br/>(sidecar)"]
        B_DB["SQLite"]
        B_Daemon -- "HTTP: GET, PUT /mirror" --> B_Server
        B_Daemon --> B_DB
    end

    A_Daemon -- "publish & subscribe" --> R1
    B_Daemon -- "publish & subscribe" --> R1
    B_Server -- "PUT /mirror fetches blob" --> A_Server
    A_Server -- "PUT /mirror fetches blob" --> B_Server

Important distinction: Only the sidecar daemon requires Nostr relay connectivity. The Blossom server has zero new requirements — it continues to serve blobs and handle PUT /mirror exactly as it does today.

Protocol Flows

1. Agreement Establishment

Both parties independently publish a parameterized replaceable event declaring their side of the agreement. When both exist, the agreement is active.

sequenceDiagram
    participant Alice as Alice's Daemon
    participant Relay as Nostr Relay
    participant Bob as Bob's Daemon

    Alice->>Relay: Publish kind 31120<br/>d=bob-pubkey, quota=500MB,<br/>server=alice-blossom.com
    Bob->>Relay: Publish kind 31120<br/>d=alice-pubkey, quota=500MB,<br/>server=bob-blossom.com

    Relay-->>Bob: Alice's agreement event
    Relay-->>Alice: Bob's agreement event

    Note over Alice: Bilateral match!<br/>effective_quota = min(500MB, 500MB)
    Note over Bob: Bilateral match!<br/>effective_quota = 500MB

    Note over Alice,Bob: Agreement ACTIVE

Either party revokes by deleting their event. Effective quota = min of both offers (symmetric — both sides mirror the same amount).
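The bilateral-match logic above can be sketched as a pure function. This is a hypothetical sketch, not the reference implementation: the `AgreementEvent` shape is a simplified, already-parsed view of a kind 31120 event, and the field names are illustrative.

```typescript
// Simplified, hypothetical parsed form of a kind 31120 Mirror Agreement event.
interface AgreementEvent {
  pubkey: string;      // author of the agreement event
  dTag: string;        // peer pubkey this offer is addressed to (the d tag)
  quotaBytes: number;  // bytes this side offers to mirror
}

// An agreement is active only when each side has published an offer addressed
// to the other. The effective quota is the minimum of the two offers, so both
// sides mirror the same amount.
function matchAgreement(
  mine: AgreementEvent,
  theirs: AgreementEvent,
): { active: boolean; effectiveQuota: number } {
  const bilateral = mine.dTag === theirs.pubkey && theirs.dTag === mine.pubkey;
  return {
    active: bilateral,
    effectiveQuota: bilateral ? Math.min(mine.quotaBytes, theirs.quotaBytes) : 0,
  };
}
```

Revocation falls out naturally: deleting either replaceable event removes one side of the match, so the agreement is no longer active.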

2. Blob Sync

When a user uploads to their Blossom server, the daemon learns about the new blob, publishes an announcement, and peers call PUT /mirror automatically.

Upload detection is the one integration point between daemon and server. Two approaches:

| Approach | How it works | Trade-off |
| --- | --- | --- |
| Client publishes announcement | User's Nostr client publishes a kind 7374 blob announcement after uploading. Daemon only watches relays. | Cleanest — no dependency on GET /list. Requires client support. |
| Daemon polls GET /list | Daemon periodically polls the local server's GET /list/{pubkey} endpoint. | Works today with no client changes, but GET /list is marked "optional and unrecommended" in BUD-02. |

Either way, the Blossom server is untouched. The sync flow:

sequenceDiagram
    participant User as Alice (User)
    participant A_Srv as Alice's Blossom Server
    participant A_Dmn as Alice's Daemon
    participant Relay as Nostr Relay
    participant B_Dmn as Bob's Daemon
    participant B_Srv as Bob's Blossom Server

    User->>A_Srv: PUT /upload (photo.jpg)
    A_Srv-->>User: Blob Descriptor

    Note over A_Dmn: Learns about new blob<br/>(via client event or /list poll)

    A_Dmn->>Relay: Publish kind 7374<br/>Blob Announcement<br/>{x, size, server}

    Relay-->>B_Dmn: Blob announcement from Alice
    B_Dmn->>B_Dmn: Check: used + size <= quota?
    B_Dmn->>B_Dmn: Sign kind 24242 auth<br/>(daemon's own key, t=upload, x=sha256)
    B_Dmn->>B_Srv: PUT /mirror<br/>{url: alice-server/sha256}<br/>Authorization: Nostr base64(auth)
    B_Srv->>A_Srv: GET /sha256
    A_Srv-->>B_Srv: blob data
    B_Srv-->>B_Dmn: Blob Descriptor (mirrored)
    B_Dmn->>B_Dmn: Update quota tracker

Each daemon has its own keypair, authorized as an uploader on its local Blossom server. When a peer's blob announcement arrives, the daemon signs its own kind 24242 auth and calls PUT /mirror on the local server. No auth events need to travel over relays. The actual mirroring uses existing PUT /mirror — no new endpoints.
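The receiving daemon's steps (quota gate, auth template, Authorization header) can be sketched as below. This is an illustrative sketch, not the reference implementation: helper names are invented, the actual event signing is omitted, and the auth-event content string is a placeholder.

```typescript
// Simplified view of an incoming kind 7374 blob announcement.
interface BlobAnnouncement {
  sha256: string;  // x tag
  size: number;    // size tag, in bytes
  server: string;  // peer's Blossom server URL
}

// Quota gate: only mirror if the blob still fits under the effective quota.
function shouldMirror(usedBytes: number, ann: BlobAnnouncement, quotaBytes: number): boolean {
  return usedBytes + ann.size <= quotaBytes;
}

// Unsigned kind 24242 auth template for the local PUT /mirror call
// (t=upload, x=sha256, as described above). The daemon signs this with its
// own key; the signer itself is not shown here.
function mirrorAuthTemplate(ann: BlobAnnouncement, expiresInSec = 300) {
  const now = Math.floor(Date.now() / 1000);
  return {
    kind: 24242,
    created_at: now,
    content: "Mirror blob announced by peer", // placeholder description
    tags: [
      ["t", "upload"],
      ["x", ann.sha256],
      ["expiration", String(now + expiresInSec)],
    ],
  };
}

// Once signed, the event JSON is base64-encoded into the Authorization header.
function authHeader(signedEventJson: string): string {
  return "Nostr " + Buffer.from(signedEventJson).toString("base64");
}
```

Because the auth event is signed locally by the daemon's own key, nothing sensitive ever has to travel over relays.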

3. Proof of Storage

This is the core feature that doesn't exist anywhere in Blossom today. Peers periodically challenge each other to prove they still hold the data. Challenges and responses are ephemeral Nostr events.

sequenceDiagram
    participant A_Dmn as Alice's Daemon
    participant Relay as Nostr Relay
    participant B_Dmn as Bob's Daemon
    participant B_Srv as Bob's Blossom Server

    Note over A_Dmn: 24h cycle: challenge random blob

    A_Dmn->>A_Dmn: Pick random blob, random byte range<br/>Compute expected proof locally
    A_Dmn->>Relay: kind 21122 Challenge<br/>{x, offset, length, nonce}

    Relay-->>B_Dmn: Challenge from Alice

    B_Dmn->>B_Srv: GET /sha256 Range: bytes=offset-end
    B_Srv-->>B_Dmn: byte range
    B_Dmn->>B_Dmn: SHA-256(bytes)
    B_Dmn->>Relay: kind 21123 Response<br/>{e=challenge-id, proof}

    Relay-->>A_Dmn: Response from Bob
    A_Dmn->>A_Dmn: Compare proof vs expected

    alt Match
        Note over A_Dmn: PASS
    else Mismatch or timeout
        Note over A_Dmn: FAIL — 3 consecutive = agreement lapses
    end
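The proof computation described above, SHA-256 over the requested byte range, can be sketched with Node's crypto module. Per the draft's event structures, the proof tag carries only the hash of the range (the nonce travels in the challenge event itself); function names here are illustrative.

```typescript
import { createHash } from "node:crypto";

// Responder side: hash the requested byte range of the stored blob.
// In the real flow the daemon fetches this range from its local Blossom
// server via a Range request; here the blob is passed in directly.
function rangeProof(blob: Uint8Array, offset: number, length: number): string {
  const slice = blob.subarray(offset, offset + length);
  return createHash("sha256").update(slice).digest("hex");
}

// Challenger side: compute the expected proof from the local copy and compare
// it against the value in the kind 21123 response.
function verifyProof(
  localBlob: Uint8Array,
  offset: number,
  length: number,
  proof: string,
): boolean {
  return rangeProof(localBlob, offset, length) === proof;
}
```

Because the challenger picks a random range each cycle, a peer cannot pass by caching a single precomputed hash; it has to be able to produce arbitrary slices of the blob.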

4. Quota Overflow

When mirrored bytes hit the quota, the daemon stops mirroring and publishes a notification so the peer knows.
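A minimal quota-tracker sketch (class and method names are illustrative, not from the reference implementation):

```typescript
// Tracks mirrored bytes against the effective quota for one agreement.
// On overflow the daemon would stop mirroring and publish a kind 7375
// quota notification so the peer knows (publishing not shown here).
class QuotaTracker {
  private used = 0;
  constructor(private quotaBytes: number) {}

  // Returns "mirror" while the blob fits, "overflow" once it would not.
  record(size: number): "mirror" | "overflow" {
    if (this.used + size > this.quotaBytes) return "overflow";
    this.used += size;
    return "mirror";
  }
}
```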

5. Report Propagation

When a BUD-09 report (kind 1984) is filed against a blob, the daemon forwards it to all peers mirroring that blob via the agreement relay. Each operator's configured policy determines the response (auto-remove, manual review, ignore).

Event Kinds

| Kind | Type | Stored? | Name | Purpose |
| --- | --- | --- | --- | --- |
| 31120 | Parameterized Replaceable | Yes | Mirror Agreement | Bilateral storage agreement. d tag = peer pubkey. |
| 7374 | Regular | Yes | Blob Announcement | New content available for mirroring. |
| 7375 | Regular | Yes | Quota Notification | Peer has hit their quota limit. |
| 21122 | Ephemeral | No | PoS Challenge | "Prove you store blob X, bytes N-M." |
| 21123 | Ephemeral | No | PoS Response | Cryptographic proof (SHA-256 of byte range). |

All kind numbers are provisional.

Why these ranges: Announcements and notifications are regular events (kind < 20000) so relays store them — daemons can catch up after downtime using since filters. PoS events are ephemeral (20000+) because they're real-time request/response and don't need persistence.
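The catch-up described above could be expressed as a NIP-01-style filter: stored kinds with a since cursor, while the ephemeral PoS kinds are only subscribed live. This is a sketch; the function name and parameters are illustrative.

```typescript
// Catch-up subscription a daemon might issue after downtime: fetch stored
// agreement, announcement, and quota events from its peers since the last
// event it processed. Ephemeral kinds (21122/21123) are deliberately absent,
// since relays don't store them.
function catchUpFilter(peerPubkeys: string[], lastSeenUnix: number) {
  return {
    kinds: [31120, 7374, 7375],
    authors: peerPubkeys,
    since: lastSeenUnix,
  };
}
```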

Event Structures

Mirror Agreement (kind 31120)

{
  "kind": 31120,
  "pubkey": "<alice-hex-pubkey>",
  "content": "",
  "tags": [
    ["d", "<bob-hex-pubkey>"],
    ["p", "<bob-hex-pubkey>"],
    ["quota", "524288000"],
    ["server", "https://alice-blossom.example.com"],
    ["relay", "wss://relay.example.com"],
    ["expiration", "1756684800"]
  ]
}
  • d = peer's pubkey (one agreement per peer)
  • quota = bytes this party offers to mirror (effective = min of both)
  • server = this party's Blossom server URL
  • relay = where this party publishes blob announcements (peer subscribes here)
  • expiration = NIP-40 unix timestamp

Blob Announcement (kind 7374)

{
  "kind": 7374,
  "pubkey": "<blob-owner-hex-pubkey>",
  "content": "",
  "tags": [
    ["x", "b1674191a88ec5cdd733e4240a81803105dc412d6c6708d53ab94fc248f4f553"],
    ["size", "184292"],
    ["m", "image/jpeg"],
    ["server", "https://alice-blossom.example.com"]
  ]
}

No auth tag is needed. Each daemon has its own keypair authorized on its local Blossom server. When a daemon receives an announcement, it signs its own kind 24242 auth to call PUT /mirror locally. This means operators only need to whitelist one key (their daemon's) regardless of how many peers they have.

PoS Challenge (kind 21122)

{
  "kind": 21122,
  "pubkey": "<challenger-hex-pubkey>",
  "content": "",
  "tags": [
    ["p", "<peer-hex-pubkey>"],
    ["x", "<blob-sha256>"],
    ["offset", "1024"],
    ["length", "1024"],
    ["nonce", "<random-hex>"]
  ]
}

PoS Response (kind 21123)

{
  "kind": 21123,
  "pubkey": "<responder-hex-pubkey>",
  "content": "",
  "tags": [
    ["p", "<challenger-hex-pubkey>"],
    ["e", "<challenge-event-id>"],
    ["proof", "<sha256-of-requested-byte-range>"]
  ]
}

Security Considerations

| Vector | Mitigation |
| --- | --- |
| Daemon holds its own signing key | Daemon keypair is separate from the operator's personal Nostr identity. Config with restricted permissions (0600). Future: NIP-46 remote signer. |
| Agreement spam | Daemon only processes events from pubkeys it has also published agreements for. Unsolicited events are ignored. |
| PoS challenge DoS | Challenges are signed Nostr events. Only processed from active agreement partners. |
| Announcement spoofing | Events are signature-verified. Invalid sigs discarded. |

Design Decisions & Rationale

| Decision | Why |
| --- | --- |
| Sidecar daemon, not server modification | Blossom servers stay untouched. No new requirements on server implementations. WebSocket/relay complexity is isolated to the daemon. |
| Daemon-owned keypairs | Each daemon has its own key, authorized on the local Blossom server. No auth events on public relays. Operators whitelist one key regardless of peer count. |
| PoS via Nostr events (not HTTP) | Daemon needs zero public endpoints. Works behind NAT. No daemon discovery problem. |
| Quota as the only content boundary | YAGNI — no filtering rules in v1. Simplest useful thing. |
| Client-side announcement (preferred) | Avoids dependency on GET /list/{pubkey}. User's Nostr client already knows about the upload. Daemon polls /list as a fallback for clients that don't support announcements. |

What's NOT in Scope (v1)

  • Payment/incentives beyond reciprocity
  • Content filtering rules
  • Automatic peer discovery (peers chosen explicitly)
  • Multi-user server support (operator = blob owner)
  • Geographic distribution awareness
  • Encryption of mirrored content

Reference Implementation

A TypeScript/Bun sidecar daemon that orchestrates all of the above. Communicates with any Blossom server via HTTP and with peers via Nostr relays. Persists state in SQLite.

Repository: [github.com/... TBD]

Open Questions for Discussion

  1. Kind number allocation — The numbers above are provisional. What ranges make sense for the Blossom/Nostr ecosystem?

  2. Upload detection — The preferred approach is client-side blob announcements (kind 7374), with GET /list polling as a fallback. Would a lightweight BUD for upload notification hooks be worth proposing? Or is the client-side approach sufficient?

  3. PoS via relays — Ephemeral events for challenge/response avoid HTTP endpoints but require both daemons to be online simultaneously. Is the 24h challenge window sufficient?

  4. Is reciprocity a real need? — This protocol assumes independent operators want mutual storage guarantees. Is this a common enough use case, or do most operators prefer commercial CDNs or running their own distributed servers?

Feedback Welcome

This is an early draft. The core thesis is that reciprocal storage agreements and proof of ongoing storage are missing primitives in the Blossom ecosystem. If that premise is wrong, the rest doesn't matter — so that's the most valuable thing to challenge.

Looking for input on:

  • Whether the reciprocity use case resonates with server operators
  • Event kind choices and tag structure
  • The sidecar daemon approach vs. alternatives
  • Anything that would make this harder to implement or adopt

Relevant specs: BUD-01 | BUD-02 | BUD-04 | BUD-09 | NIP-B7

shawnyeager commented Feb 23, 2026

Thanks to @pippellia-btc for valuable feedback that shaped this revision:

  • Reframed the problem statement — PUT /mirror client-side scales fine. The actual gap is reciprocity and accountability: no mutual storage agreements, no proof of ongoing storage, no coordination between independent operators.
  • Clarified the sidecar architecture — Only the daemon speaks WS. The Blossom server itself is completely unchanged — no WebSocket support, no new endpoints, no modifications whatsoever.
  • Upload detection options — Client-side blob announcements (kind 7374) as the preferred approach, with GET /list polling as a fallback. Added a table comparing both trade-offs.
  • Added 'What This Is NOT' section — Explicitly scoping this as a reciprocity layer for independent operators, not a replacement for client-side mirroring.

@shawnyeager

Thanks to @flox1an for feedback on the auth model. Key insight: if each daemon has its own keypair authorized on its local Blossom server, there's no need to relay auth events at all. The daemon signs its own PUT /mirror auth locally, operators whitelist one key regardless of peer count, and no auth events sit on public relays.

This revision removes the auth tag from blob announcements, moves auth signing to the receiving daemon, and updates the security model accordingly.
