@CiprianSpiridon
Last active March 10, 2026 07:12
# Rendel — Platform Overview

Rendel turns any website into clean, LLM-ready Markdown so AI agents can read the web.

## What Rendel Does

```mermaid
flowchart TD
    AGENT["&nbsp;<br/>AI Agent<br/>(ChatGPT, Perplexity, etc.)<br/>&nbsp;"]

    AGENT --->|"request a webpage"| RENDEL

    RENDEL["&nbsp;<br/>Rendel<br/>&nbsp;"]

    RENDEL --->|"render & extract content"| SITE

    SITE["&nbsp;<br/>Any Website<br/>(JS-heavy, SPAs, dynamic)<br/>&nbsp;"]

    SITE --->|"raw HTML + JS"| RENDEL
    RENDEL --->|"Markdown pages"| PAGES
    RENDEL --->|"site navigation"| LLMS

    PAGES["&nbsp;<br/>PDP, PLP, Brand, etc.<br/>&nbsp;"]

    LLMS["&nbsp;<br/>llms.txt index<br/>&nbsp;"]

    PAGES ---> AGENT
    LLMS ---> AGENT

    classDef agent fill:#f6f3ea,stroke:#9c7a2b,color:#2b2416,stroke-width:1.5px;
    classDef platform fill:#e7f3e7,stroke:#4d7b52,color:#18311b,stroke-width:2px;
    classDef site fill:#d9e8f5,stroke:#3f6f94,color:#132433,stroke-width:1.5px;
    classDef output fill:#f3e4d7,stroke:#8a5a2b,color:#301d0f,stroke-width:1.5px;

    class AGENT agent;
    class RENDEL platform;
    class SITE site;
    class PAGES,LLMS output;
```

The core value: websites are built for browsers, but AI agents need structured text. Rendel bridges that gap — headless Chrome renders the page, Readability extracts the content, and a conversion pipeline produces Markdown with metadata frontmatter.


## How Serving Works (Runtime)

Two Lambdas, one Docker image. The API Lambda handles all inbound requests: cached pages are served in under 50 ms, and on a cache miss it renders the page on the spot in under 5 s.

```mermaid
flowchart TD
    AGENT["&nbsp;<br/>AI Agent<br/>&nbsp;"]

    AGENT ---> APIGW

    APIGW["&nbsp;<br/>API Gateway<br/>&nbsp;"]

    APIGW ---> API

    API["&nbsp;<br/>API Lambda<br/>512 MB · 30s timeout<br/>&nbsp;"]

    API ---> CACHE

    CACHE{"&nbsp;<br/>Cached?<br/>&nbsp;"}

    CACHE -- "hit" ----> S3

    S3["&nbsp;<br/>S3<br/>Markdown cache<br/>&nbsp;"]

    S3 --->|"< 50ms"| AGENT

    CACHE -- "miss" ----> SITE

    SITE["&nbsp;<br/>Target Website<br/>&nbsp;"]

    SITE --->|"raw HTML"| API
    API --->|"render + extract + convert"| S3

    classDef external fill:#f6f3ea,stroke:#9c7a2b,color:#2b2416,stroke-width:1.5px;
    classDef compute fill:#e7f3e7,stroke:#4d7b52,color:#18311b,stroke-width:1.5px;
    classDef state fill:#f3e4d7,stroke:#8a5a2b,color:#301d0f,stroke-width:1.5px;
    classDef edge fill:#d9e8f5,stroke:#3f6f94,color:#132433,stroke-width:1.5px;
    classDef decision fill:#fff7e6,stroke:#9c7a2b,color:#2b2416,stroke-width:1.5px;

    class AGENT,SITE external;
    class APIGW edge;
    class API compute;
    class S3 state;
    class CACHE decision;
```
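In this flow, the cache check reduces to an S3 GET on a deterministic object key. A minimal sketch of deriving that key from the `cache/{domain}/{sha256}.md` layout shown in the infrastructure map (hashing the full URL string is an assumption; the real pipeline may normalize URLs first):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"net/url"
)

// cacheKey maps a page URL to its S3 object key, following the
// cache/{domain}/{sha256}.md layout from the infrastructure map.
func cacheKey(raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256([]byte(raw))
	return fmt.Sprintf("cache/%s/%x.md", u.Hostname(), sum), nil
}

func main() {
	key, err := cacheKey("https://shop.example/products/42")
	if err != nil {
		panic(err)
	}
	fmt.Println(key)
}
```

Because the key is pure function of the URL, the hit path needs no lookup table: a single S3 GET either returns the Markdown or signals a miss.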

## How Bulk Refresh Works

Customers submit URLs (or a sitemap) to refresh in bulk. The API Lambda fans out work to a queue, and Worker Lambdas render pages in parallel. Progress is tracked in DynamoDB. On completion, llms.txt discovery files are regenerated.

```mermaid
flowchart TD
    CUSTOMER["&nbsp;<br/>Customer<br/>(API call or dashboard)<br/>&nbsp;"]

    CUSTOMER --->|"POST /ingest"| API

    API["&nbsp;<br/>API Lambda<br/>&nbsp;"]

    API --->|"create job"| JOBS
    API --->|"enqueue URL batches"| SQS

    JOBS["&nbsp;<br/>DynamoDB<br/>job tracking<br/>&nbsp;"]

    SQS["&nbsp;<br/>SQS Queue<br/>&nbsp;"]

    SQS ---> WORKER
    SQS -.->|"3 receives failed"| DLQ

    DLQ["&nbsp;<br/>Dead-Letter Queue<br/>&nbsp;"]

    WORKER["&nbsp;<br/>Worker Lambda<br/>1024 MB · 120s timeout<br/>&nbsp;"]

    WORKER --->|"render pages"| SITE

    SITE["&nbsp;<br/>Target Website<br/>&nbsp;"]

    SITE ---> WORKER
    WORKER --->|"store markdown"| S3
    WORKER --->|"update progress"| JOBS

    S3["&nbsp;<br/>S3<br/>Markdown + metadata<br/>&nbsp;"]

    S3 --->|"on completion"| LLMS

    LLMS["&nbsp;<br/>llms.txt<br/>site index<br/>&nbsp;"]

    classDef external fill:#f6f3ea,stroke:#9c7a2b,color:#2b2416,stroke-width:1.5px;
    classDef compute fill:#e7f3e7,stroke:#4d7b52,color:#18311b,stroke-width:1.5px;
    classDef state fill:#f3e4d7,stroke:#8a5a2b,color:#301d0f,stroke-width:1.5px;
    classDef edge fill:#d9e8f5,stroke:#3f6f94,color:#132433,stroke-width:1.5px;

    class CUSTOMER,SITE external;
    class API,WORKER compute;
    class JOBS,SQS,S3,LLMS,DLQ state;
```
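The fan-out step, splitting the submitted URL list into queue messages of at most 10 URLs each (the batch size shown in the infrastructure map), is a plain chunking routine. A sketch (function name and shape are illustrative, not Rendel's code):

```go
package main

import "fmt"

// batchURLs splits a submitted URL list into SQS message payloads of
// at most batchSize URLs each. batchSize must be positive
// (the infrastructure map shows 10 URLs per message).
func batchURLs(urls []string, batchSize int) [][]string {
	var batches [][]string
	for len(urls) > 0 {
		n := batchSize
		if len(urls) < n {
			n = len(urls)
		}
		batches = append(batches, urls[:n])
		urls = urls[n:]
	}
	return batches
}

func main() {
	urls := make([]string, 0, 23)
	for i := 0; i < 23; i++ {
		urls = append(urls, fmt.Sprintf("https://shop.example/p/%d", i))
	}
	// 23 URLs at 10 per message yields batches of 10, 10, and 3.
	for _, b := range batchURLs(urls, 10) {
		fmt.Printf("message with %d URLs\n", len(b))
	}
}
```

Batching keeps the per-message work bounded, so a single Worker invocation fits comfortably inside its 120 s timeout and a poison batch sends at most 10 URLs to the DLQ.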

## Full AWS Infrastructure Map

Both Lambdas share one container image from ECR (Go + headless Chrome). Each Lambda includes a LambdaWatch layer that ships logs to Grafana Loki.
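The log-shipping step can be sketched as below: buffered lines are joined into one payload and gzipped before being pushed to Loki. Loki's actual push API expects a structured stream payload with labels; this illustrates only the batch-and-compress stage labeled "batch + gzip" in the map:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"strings"
)

// gzipBatch joins buffered log lines into one payload and gzips it,
// as a log shipper would before POSTing a batch to Loki.
func gzipBatch(lines []string) ([]byte, error) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write([]byte(strings.Join(lines, "\n"))); err != nil {
		return nil, err
	}
	// Close flushes the compressor and writes the gzip trailer.
	if err := zw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	payload, err := gzipBatch([]string{
		`{"level":"info","msg":"cache hit"}`,
		`{"level":"info","msg":"render complete"}`,
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("compressed batch: %d bytes\n", len(payload))
}
```

Batching amortizes the HTTP overhead per log line and gzip shrinks the repetitive JSON, which matters when every Lambda invocation ships its own logs.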

```mermaid
flowchart TD
    AGENT["&nbsp;<br/>AI Agents<br/>(ChatGPT, Perplexity, etc.)<br/>&nbsp;"]
    CUSTOMER["&nbsp;<br/>Customer<br/>(API / Dashboard)<br/>&nbsp;"]

    AGENT ---> R53
    CUSTOMER ---> R53

    subgraph DNS["&nbsp; DNS &nbsp;"]
        R53["&nbsp;<br/>Route 53<br/>api.rendel.app<br/>&nbsp;"]
    end

    R53 ---> CF

    subgraph CDN["&nbsp; CDN &nbsp;"]
        CF["&nbsp;<br/>CloudFront<br/>Accept header in cache key<br/>&nbsp;"]
        ACM_CF["&nbsp;<br/>ACM Certificate<br/>*.rendel.app<br/>&nbsp;"]
    end

    ACM_CF -.-> CF

    CF ---> APIGW

    subgraph GATEWAY["&nbsp; API Gateway (HTTP API) &nbsp;"]
        APIGW["&nbsp;<br/>Routes<br/>GET / · POST /ingest · GET /ingest/{id}<br/>DELETE /ingest/{id} · DELETE /purge<br/>GET /usage · GET /health<br/>GET /llms.txt · GET /llms-full.txt<br/>&nbsp;"]
    end

    APIGW ---> API

    subgraph APILAMBDA["&nbsp; API Lambda · 512 MB · 30s · container image &nbsp;"]
        API["&nbsp;<br/>Go Handler<br/>auth · negotiate · cache check<br/>render on miss · metering<br/>&nbsp;"]
        LW1["&nbsp;<br/>LambdaWatch layer<br/>&nbsp;"]
    end

    API --->|"enqueue URL batches<br/>(10 URLs per message)"| SQS

    subgraph MESSAGING["&nbsp; Messaging &nbsp;"]
        SQS["&nbsp;<br/>SQS Standard Queue<br/>visibility timeout: 120s<br/>&nbsp;"]
        DLQ["&nbsp;<br/>SQS Dead-Letter Queue<br/>maxReceiveCount: 3<br/>&nbsp;"]
    end

    SQS ---> WORKER
    SQS -.->|"3 receives failed"| DLQ

    subgraph WORKERLAMBDA["&nbsp; Worker Lambda · 1024 MB · 120s · container image &nbsp;"]
        WORKER["&nbsp;<br/>Go Handler<br/>Chrome render · Readability extract<br/>HTML→Markdown · frontmatter<br/>&nbsp;"]
        CHROME["&nbsp;<br/>headless-shell<br/>~200 MB · 3 concurrent tabs<br/>&nbsp;"]
        LW2["&nbsp;<br/>LambdaWatch layer<br/>&nbsp;"]
    end

    CHROME -.-> WORKER

    API ---> SITE
    WORKER ---> SITE

    SITE["&nbsp;<br/>Target Websites<br/>User-Agent: Rendel/1.0<br/>max 5 concurrent per domain · 1s crawl delay<br/>&nbsp;"]

    API ---> S3_MD
    WORKER ---> S3_MD
    WORKER ---> S3_META

    subgraph S3["&nbsp; S3 Bucket &nbsp;"]
        S3_MD["&nbsp;<br/>cache/{domain}/{sha256}.md<br/>rendered Markdown<br/>&nbsp;"]
        S3_META["&nbsp;<br/>cache/{domain}/_meta/{sha256}.json<br/>title · description · word_count<br/>&nbsp;"]
        S3_LLMS["&nbsp;<br/>cache/{domain}/_llms.txt<br/>cache/{domain}/_llms-full.txt<br/>&nbsp;"]
    end

    S3_META ---> S3_LLMS

    API ---> DDB_KEYS
    API ---> DDB_METER
    API ---> DDB_JOBS
    WORKER ---> DDB_JOBS

    subgraph DYNAMO["&nbsp; DynamoDB &nbsp;"]
        DDB_KEYS["&nbsp;<br/>API Keys table<br/>PK: key_hash (SHA-256)<br/>plan tier · rate limit · active<br/>LRU cached 60s in Lambda<br/>&nbsp;"]
        DDB_METER["&nbsp;<br/>Metering table<br/>PK: api_key · SK: period (2026-03)<br/>renders_used · serves_used<br/>atomic ADD counters<br/>&nbsp;"]
        DDB_JOBS["&nbsp;<br/>Jobs table<br/>PK: job_id · GSI: api_key<br/>status · total · completed · failed<br/>atomic INCREMENT counters<br/>&nbsp;"]
    end

    LW1 -.->|"batch + gzip"| LOKI
    LW2 -.->|"batch + gzip"| LOKI

    subgraph OBS["&nbsp; Observability &nbsp;"]
        LOKI["&nbsp;<br/>Grafana Loki<br/>labels: function_name · region · service_name<br/>&nbsp;"]
        GRAFANA["&nbsp;<br/>Grafana<br/>dashboards · alerts<br/>&nbsp;"]
        CW["&nbsp;<br/>CloudWatch<br/>Lambda metrics · API Gateway metrics<br/>SQS queue depth · DLQ alarm<br/>&nbsp;"]
    end

    LOKI ---> GRAFANA
    APILAMBDA -.-> CW
    WORKERLAMBDA -.-> CW
    SQS -.-> CW

    subgraph DEPLOY["&nbsp; Deployment &nbsp;"]
        ECR["&nbsp;<br/>ECR<br/>rendel:latest<br/>Go binary + headless-shell<br/>multi-arch (arm64 / amd64)<br/>&nbsp;"]
        SAM["&nbsp;<br/>SAM / CloudFormation<br/>template.yaml<br/>infra-as-code<br/>&nbsp;"]
    end

    ECR -.->|"image source"| APILAMBDA
    ECR -.->|"image source"| WORKERLAMBDA
    SAM -.->|"provisions"| GATEWAY
    SAM -.->|"provisions"| APILAMBDA
    SAM -.->|"provisions"| WORKERLAMBDA
    SAM -.->|"provisions"| MESSAGING
    SAM -.->|"provisions"| DYNAMO
    SAM -.->|"provisions"| S3

    classDef edge fill:#d9e8f5,stroke:#3f6f94,color:#132433,stroke-width:1.5px;
    classDef compute fill:#e7f3e7,stroke:#4d7b52,color:#18311b,stroke-width:1.5px;
    classDef state fill:#f3e4d7,stroke:#8a5a2b,color:#301d0f,stroke-width:1.5px;
    classDef external fill:#f6f3ea,stroke:#9c7a2b,color:#2b2416,stroke-width:1.5px;
    classDef observability fill:#f0e6f6,stroke:#7b4f8a,color:#2d1638,stroke-width:1.5px;
    classDef deploy fill:#e8eef5,stroke:#4a6785,color:#1a2a3d,stroke-width:1.5px;

    class AGENT,CUSTOMER,SITE external;
    class R53,CF,ACM_CF,APIGW edge;
    class API,WORKER,CHROME,LW1,LW2 compute;
    class S3_MD,S3_META,S3_LLMS,SQS,DLQ,DDB_KEYS,DDB_METER,DDB_JOBS state;
    class LOKI,GRAFANA,CW observability;
    class ECR,SAM deploy;
```