Status: Draft — seeking feedback from Blossom and Nostr developers

Date: 2026-02-22
Blossom has solid primitives for mirroring — PUT /mirror is lightweight, client-driven, and works well. A client can already upload a blob and mirror it to N servers with N simple HTTP calls. That part scales fine.
What's missing is reciprocity and accountability:
- No mutual storage agreements. If I want my blobs stored on your server and your blobs stored on mine, there's no protocol for that arrangement. I either run my own servers, pay a commercial CDN, or trust someone's goodwill.
- No proof of ongoing storage. After mirroring a blob, there's no way to verify the other server still holds it 6 months later. Servers can silently drop content with no consequence.
- No automated coordination between independent operators. Distributed server implementations exist (one operator, multiple nodes), but there's no mechanism for separate operators to form bilateral storage commitments.
This proposal adds a reciprocity layer on top of existing Blossom primitives: verifiable bilateral agreements, automated PUT /mirror calls triggered by Nostr events, and periodic proof-of-storage challenges.
This is not a replacement for client-side mirroring. PUT /mirror works great for users who control their own servers or use commercial providers. This is for a different use case: independent server operators who want mutual redundancy through reciprocal agreements — "I'll store 500MB of yours if you store 500MB of mine."
It's also not a distributed server implementation. Those are single-operator, multi-node setups. This is multi-operator, each running their own Blossom server, forming bilateral peering agreements.
The protocol introduces a sidecar daemon that runs alongside any existing Blossom server. The daemon handles all coordination — the Blossom server itself is completely unchanged. No modifications, no new endpoints, no WebSocket support required on the server.
The daemon speaks Nostr (via relays) to coordinate with peers and HTTP to talk to the local Blossom server.
```mermaid
graph TB
    subgraph "Alice's Infrastructure"
        A_Server["Blossom Server<br/>(unchanged, any implementation)"]
        A_Daemon["blossom-cdn daemon<br/>(sidecar)"]
        A_DB["SQLite"]
        A_Daemon -- "HTTP: GET, PUT /mirror" --> A_Server
        A_Daemon --> A_DB
    end
    subgraph "Nostr Relays"
        R1["Relay Pool"]
    end
    subgraph "Bob's Infrastructure"
        B_Server["Blossom Server<br/>(unchanged, any implementation)"]
        B_Daemon["blossom-cdn daemon<br/>(sidecar)"]
        B_DB["SQLite"]
        B_Daemon -- "HTTP: GET, PUT /mirror" --> B_Server
        B_Daemon --> B_DB
    end
    A_Daemon -- "publish & subscribe" --> R1
    B_Daemon -- "publish & subscribe" --> R1
    B_Server -- "PUT /mirror fetches blob" --> A_Server
    A_Server -- "PUT /mirror fetches blob" --> B_Server
```
Important distinction: Only the sidecar daemon requires Nostr relay connectivity. The Blossom server has zero new requirements — it continues to serve blobs and handle PUT /mirror exactly as it does today.
Both parties independently publish a parameterized replaceable event declaring their side of the agreement. When both exist, the agreement is active.
```mermaid
sequenceDiagram
    participant Alice as Alice's Daemon
    participant Relay as Nostr Relay
    participant Bob as Bob's Daemon
    Alice->>Relay: Publish kind 31120<br/>d=bob-pubkey, quota=500MB,<br/>server=alice-blossom.com
    Bob->>Relay: Publish kind 31120<br/>d=alice-pubkey, quota=500MB,<br/>server=bob-blossom.com
    Relay-->>Bob: Alice's agreement event
    Relay-->>Alice: Bob's agreement event
    Note over Alice: Bilateral match!<br/>effective_quota = min(500MB, 500MB)
    Note over Bob: Bilateral match!<br/>effective_quota = 500MB
    Note over Alice,Bob: Agreement ACTIVE
```
Either party revokes by deleting their event. Effective quota = min of both offers (symmetric — both sides mirror the same amount).
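The matching rule is simple enough to sketch directly. The following TypeScript is illustrative only — a real daemon would also verify event signatures and honor `expiration` tags — but it shows the bilateral check and the min-quota rule:

```typescript
// Sketch: deciding whether a bilateral agreement is active.
// Tag names (d, quota) follow the draft's kind 31120 schema.

interface AgreementEvent {
  pubkey: string;
  tags: string[][];
}

function tag(ev: AgreementEvent, name: string): string | undefined {
  return ev.tags.find((t) => t[0] === name)?.[1];
}

// An agreement is active when each party's event names the other in its
// `d` tag; the effective quota is the minimum of the two offers.
function matchAgreement(
  mine: AgreementEvent,
  theirs: AgreementEvent,
): { active: boolean; effectiveQuota: number } {
  const active =
    tag(mine, "d") === theirs.pubkey && tag(theirs, "d") === mine.pubkey;
  const effectiveQuota = active
    ? Math.min(Number(tag(mine, "quota")), Number(tag(theirs, "quota")))
    : 0;
  return { active, effectiveQuota };
}

const alice = { pubkey: "alice", tags: [["d", "bob"], ["quota", "524288000"]] };
const bob = { pubkey: "bob", tags: [["d", "alice"], ["quota", "262144000"]] };
console.log(matchAgreement(alice, bob)); // { active: true, effectiveQuota: 262144000 }
```

Because both daemons run the same deterministic check against the same two events, no handshake or third message is needed to agree on the effective quota.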
When a user uploads to their Blossom server, the daemon learns about the new blob, publishes an announcement, and peers call PUT /mirror automatically.
Upload detection is the one integration point between daemon and server. Two approaches:
| Approach | How it works | Trade-off |
|---|---|---|
| Client publishes announcement | User's Nostr client publishes a kind 7374 blob announcement after uploading. Daemon only watches relays. | Cleanest — no dependency on GET /list. Requires client support. |
| Daemon polls GET /list | Daemon periodically polls the local server's GET /list/{pubkey} endpoint. | Works today with no client changes, but GET /list is marked "optional and unrecommended" in BUD-02. |
Either way, the Blossom server is untouched. The sync flow:
```mermaid
sequenceDiagram
    participant User as Alice (User)
    participant A_Srv as Alice's Blossom Server
    participant A_Dmn as Alice's Daemon
    participant Relay as Nostr Relay
    participant B_Dmn as Bob's Daemon
    participant B_Srv as Bob's Blossom Server
    User->>A_Srv: PUT /upload (photo.jpg)
    A_Srv-->>User: Blob Descriptor
    Note over A_Dmn: Learns about new blob<br/>(via client event or /list poll)
    A_Dmn->>Relay: Publish kind 7374<br/>Blob Announcement<br/>{x, size, server}
    Relay-->>B_Dmn: Blob announcement from Alice
    B_Dmn->>B_Dmn: Check: used + size <= quota?
    B_Dmn->>B_Dmn: Sign kind 24242 auth<br/>(daemon's own key, t=upload, x=sha256)
    B_Dmn->>B_Srv: PUT /mirror<br/>{url: alice-server/sha256}<br/>Authorization: Nostr base64(auth)
    B_Srv->>A_Srv: GET /sha256
    A_Srv-->>B_Srv: blob data
    B_Srv-->>B_Dmn: Blob Descriptor (mirrored)
    B_Dmn->>B_Dmn: Update quota tracker
```
Each daemon has its own keypair, authorized as an uploader on its local Blossom server. When a peer's blob announcement arrives, the daemon signs its own kind 24242 auth and calls PUT /mirror on the local server. No auth events need to travel over relays. The actual mirroring uses existing PUT /mirror — no new endpoints.
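A minimal sketch of the auth step, assuming the BUD-01 auth event shape (kind 24242 with `t`, `x`, and `expiration` tags). Signing is deliberately omitted — a real daemon would sign the template with its own key via a Nostr library before encoding the header:

```typescript
// Sketch: the unsigned Blossom auth event a daemon would sign with its own
// key before calling PUT /mirror locally. Field details follow BUD-01/BUD-04
// as described in this draft; the content string is illustrative.
import { Buffer } from "node:buffer";

interface UnsignedEvent {
  kind: number;
  created_at: number;
  content: string;
  tags: string[][];
}

function mirrorAuthTemplate(sha256: string): UnsignedEvent {
  const now = Math.floor(Date.now() / 1000);
  return {
    kind: 24242,
    created_at: now,
    content: "Mirror blob under reciprocity agreement",
    tags: [
      ["t", "upload"],                    // PUT /mirror uses the upload auth type
      ["x", sha256],                      // hash of the blob being mirrored
      ["expiration", String(now + 300)],  // short-lived auth
    ],
  };
}

// After signing, the serialized event goes into the Authorization header:
function authHeader(signedEventJson: string): string {
  return "Nostr " + Buffer.from(signedEventJson).toString("base64");
}

const tmpl = mirrorAuthTemplate(
  "b1674191a88ec5cdd733e4240a81803105dc412d6c6708d53ab94fc248f4f553",
);
console.log(tmpl.kind, tmpl.tags[0]); // 24242 [ 't', 'upload' ]
```

Because the auth event is produced and consumed entirely on the mirroring operator's side, it never touches a relay.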
This is the core feature that doesn't exist anywhere in Blossom today. Peers periodically challenge each other to prove they still hold the data. Challenges and responses are ephemeral Nostr events.
```mermaid
sequenceDiagram
    participant A_Dmn as Alice's Daemon
    participant Relay as Nostr Relay
    participant B_Dmn as Bob's Daemon
    participant B_Srv as Bob's Blossom Server
    Note over A_Dmn: 24h cycle: challenge random blob
    A_Dmn->>A_Dmn: Pick random blob, random byte range<br/>Compute expected proof locally
    A_Dmn->>Relay: kind 21122 Challenge<br/>{x, offset, length, nonce}
    Relay-->>B_Dmn: Challenge from Alice
    B_Dmn->>B_Srv: GET /sha256 Range: bytes=offset-end
    B_Srv-->>B_Dmn: byte range
    B_Dmn->>B_Dmn: SHA-256(bytes)
    B_Dmn->>Relay: kind 21123 Response<br/>{e=challenge-id, proof}
    Relay-->>A_Dmn: Response from Bob
    A_Dmn->>A_Dmn: Compare proof vs expected
    alt Match
        Note over A_Dmn: PASS
    else Mismatch or timeout
        Note over A_Dmn: FAIL — 3 consecutive = agreement lapses
    end
```
When mirrored bytes hit the quota, the daemon stops mirroring and publishes a notification so the peer knows.
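The accounting behind that decision is a running total per peer. A toy version (names illustrative, not part of the draft spec):

```typescript
// Sketch: per-peer quota accounting. The daemon reserves bytes before
// calling PUT /mirror; a failed reservation triggers the quota notification.
class QuotaTracker {
  private used = 0;
  constructor(private readonly quota: number) {}

  // Reserve the bytes if the blob fits within the effective quota.
  tryReserve(size: number): boolean {
    if (this.used + size > this.quota) return false;
    this.used += size;
    return true;
  }

  get remaining(): number {
    return this.quota - this.used;
  }
}

const q = new QuotaTracker(500 * 1024 * 1024); // 500MB effective quota
console.log(q.tryReserve(184292)); // true — mirror proceeds
console.log(q.remaining);
```

In the real daemon the `used` counter would live in SQLite so it survives restarts, and would be decremented when mirrored blobs are deleted.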
When a BUD-09 report (kind 1984) is filed against a blob, the daemon forwards it to all peers mirroring that blob via the agreement relay. Each operator's configured policy determines the response (auto-remove, manual review, ignore).
| Kind | Type | Stored? | Name | Purpose |
|---|---|---|---|---|
| 31120 | Parameterized Replaceable | Yes | Mirror Agreement | Bilateral storage agreement. d tag = peer pubkey. |
| 7374 | Regular | Yes | Blob Announcement | New content available for mirroring. |
| 7375 | Regular | Yes | Quota Notification | Peer has hit their quota limit. |
| 21122 | Ephemeral | No | PoS Challenge | "Prove you store blob X, bytes N-M." |
| 21123 | Ephemeral | No | PoS Response | Cryptographic proof (SHA-256 of byte range). |
All kind numbers are provisional.
Why these ranges: Announcements and notifications are regular events (kind < 20000) so relays store them — daemons can catch up after downtime using since filters. PoS events are ephemeral (20000+) because they're real-time request/response and don't need persistence.
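Concretely, the catch-up path is a standard NIP-01 subscription. The filters below are a sketch (pubkey and timestamp are placeholders):

```typescript
// Sketch: NIP-01 filters a daemon might use per peer. Stored kinds support
// `since`-based catch-up after downtime; ephemeral PoS kinds are live-only
// because relays do not persist them.
const peerPubkey = "<bob-hex-pubkey>"; // placeholder
const lastSeen = 1756000000;           // unix ts of last processed event

const catchUpFilter = {
  kinds: [7374, 7375],   // blob announcements + quota notifications
  authors: [peerPubkey],
  since: lastSeen,
};

const liveFilter = {
  kinds: [21122, 21123], // PoS challenge/response, real-time only
  authors: [peerPubkey],
};

console.log(JSON.stringify(catchUpFilter), JSON.stringify(liveFilter));
```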
```json
{
  "kind": 31120,
  "pubkey": "<alice-hex-pubkey>",
  "content": "",
  "tags": [
    ["d", "<bob-hex-pubkey>"],
    ["p", "<bob-hex-pubkey>"],
    ["quota", "524288000"],
    ["server", "https://alice-blossom.example.com"],
    ["relay", "wss://relay.example.com"],
    ["expiration", "1756684800"]
  ]
}
```

- `d` = peer's pubkey (one agreement per peer)
- `quota` = bytes this party offers to mirror (effective = min of both)
- `server` = this party's Blossom server URL
- `relay` = where this party publishes blob announcements (peer subscribes here)
- `expiration` = NIP-40 unix timestamp
```json
{
  "kind": 7374,
  "pubkey": "<blob-owner-hex-pubkey>",
  "content": "",
  "tags": [
    ["x", "b1674191a88ec5cdd733e4240a81803105dc412d6c6708d53ab94fc248f4f553"],
    ["size", "184292"],
    ["m", "image/jpeg"],
    ["server", "https://alice-blossom.example.com"]
  ]
}
```

No `auth` tag is needed. Each daemon has its own keypair authorized on its local Blossom server. When a daemon receives an announcement, it signs its own kind 24242 auth to call PUT /mirror locally. This means operators only need to whitelist one key (their daemon's) regardless of how many peers they have.
```json
{
  "kind": 21122,
  "pubkey": "<challenger-hex-pubkey>",
  "content": "",
  "tags": [
    ["p", "<peer-hex-pubkey>"],
    ["x", "<blob-sha256>"],
    ["offset", "1024"],
    ["length", "1024"],
    ["nonce", "<random-hex>"]
  ]
}
```

```json
{
  "kind": 21123,
  "pubkey": "<responder-hex-pubkey>",
  "content": "",
  "tags": [
    ["p", "<challenger-hex-pubkey>"],
    ["e", "<challenge-event-id>"],
    ["proof", "<sha256-of-requested-byte-range>"]
  ]
}
```

| Vector | Mitigation |
|---|---|
| Daemon key compromise | Daemon keypair is separate from the operator's personal Nostr identity. Config stored with restricted permissions (0600). Future: NIP-46 remote signer. |
| Agreement spam | Daemon only processes events from pubkeys it has also published agreements for. Unsolicited ignored. |
| PoS challenge DoS | Challenges are signed Nostr events. Only processed from active agreement partners. |
| Announcement spoofing | Events are signature-verified. Invalid sigs discarded. |
| Decision | Why |
|---|---|
| Sidecar daemon, not server modification | Blossom servers stay untouched. No new requirements on server implementations. WebSocket/relay complexity is isolated to the daemon. |
| Daemon-owned keypairs | Each daemon has its own key, authorized on the local Blossom server. No auth events on public relays. Operators whitelist one key regardless of peer count. |
| PoS via Nostr events (not HTTP) | Daemon needs zero public endpoints. Works behind NAT. No daemon discovery problem. |
| Quota as the only content boundary | YAGNI — no filtering rules in v1. Simplest useful thing. |
| Client-side announcement (preferred) | Avoids dependency on GET /list/{pubkey}. User's Nostr client already knows about the upload. Daemon polls /list as a fallback for clients that don't support announcements. |
- Payment/incentives beyond reciprocity
- Content filtering rules
- Automatic peer discovery (peers chosen explicitly)
- Multi-user server support (operator = blob owner)
- Geographic distribution awareness
- Encryption of mirrored content
A TypeScript/Bun sidecar daemon that orchestrates all of the above. Communicates with any Blossom server via HTTP and with peers via Nostr relays. Persists state in SQLite.
Repository: [github.com/... TBD]
- Kind number allocation — The numbers above are provisional. What ranges make sense for the Blossom/Nostr ecosystem?
- Upload detection — The preferred approach is client-side blob announcements (kind 7374), with GET /list polling as a fallback. Would a lightweight BUD for upload notification hooks be worth proposing? Or is the client-side approach sufficient?
- PoS via relays — Ephemeral events for challenge/response avoid HTTP endpoints but require both daemons to be online simultaneously. Is the 24h challenge window sufficient?
- Is reciprocity a real need? — This protocol assumes independent operators want mutual storage guarantees. Is this a common enough use case, or do most operators prefer commercial CDNs or running their own distributed servers?
This is an early draft. The core thesis is that reciprocal storage agreements and proof of ongoing storage are missing primitives in the Blossom ecosystem. If that premise is wrong, the rest doesn't matter — so that's the most valuable thing to challenge.
Looking for input on:
- Whether the reciprocity use case resonates with server operators
- Event kind choices and tag structure
- The sidecar daemon approach vs. alternatives
- Anything that would make this harder to implement or adopt
Thanks to @flox1an for feedback on the auth model. Key insight: if each daemon has its own keypair authorized on its local Blossom server, there's no need to relay auth events at all. The daemon signs its own PUT /mirror auth locally, operators whitelist one key regardless of peer count, and no auth events sit on public relays.

This revision removes the auth tag from blob announcements, moves auth signing to the receiving daemon, and updates the security model accordingly.