@Emmanuel-Bamidele
Last active April 6, 2026 07:51
SupaVector: LLM memory

SupaVector

We just open-sourced SupaVector: a self-hosted memory and retrieval platform for AI applications and agents.

What it does

SupaVector gives AI systems a persistent memory layer so they can:

  • ingest documents and notes
  • store and retrieve relevant context
  • run grounded Q&A over stored knowledge
  • manage long-term memory in a structured way
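The ingest → retrieve → grounded-answer loop described above can be sketched with a toy in-memory store. This is an illustration of the pattern only, not SupaVector's actual API; the class name, method names, and scoring are all made up for the example (real retrieval would use embedding similarity, not word overlap).

```python
class ToyMemory:
    """Toy in-memory memory layer: ingest notes, retrieve by word overlap."""

    def __init__(self):
        self.docs = []

    def ingest(self, text):
        # Store a document or note verbatim.
        self.docs.append(text)

    def retrieve(self, query, k=1):
        # Rank stored docs by shared words with the query (stand-in
        # for real embedding-based similarity search).
        q = set(query.lower().split())
        score = lambda d: len(q & set(d.lower().split()))
        return sorted(self.docs, key=score, reverse=True)[:k]

    def ask(self, question):
        # "Grounded" answer: return the most relevant stored note,
        # which a real system would pass to an LLM as context.
        context = self.retrieve(question, k=1)
        return context[0] if context else "no memory yet"

mem = ToyMemory()
mem.ingest("SupaVector runs as a self-hosted memory engine")
mem.ingest("the gateway exposes write, search, and ask endpoints")
print(mem.ask("what endpoints does the gateway expose"))
# → the gateway exposes write, search, and ask endpoints
```

The point is the shape of the loop: memory is written once, then repeatedly retrieved and fed back into generation as context.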

How it’s built

The current stack combines:

  • a C++ vector store for similarity search
  • a Node.js gateway for APIs, auth, docs UI, and jobs
  • Postgres for metadata, auth, tenant settings, and memory state
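The core job of the vector-store component, nearest-neighbor similarity search over embedding vectors, can be shown with a brute-force Python sketch. The real store is a C++ implementation with its own indexing; the vectors and document IDs below are invented for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length float vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query, vectors, k=2):
    # Brute-force scan: score every stored vector against the query
    # and keep the k best. An indexed vector store exists to avoid
    # exactly this linear scan at scale.
    scored = sorted(vectors.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

vectors = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.9, 0.0],
    "doc_c": [0.8, 0.2, 0.1],
}
print(top_k([1.0, 0.0, 0.0], vectors, k=2))
# → ['doc_a', 'doc_c']
```

In the described architecture, the Node.js gateway would handle auth and request shaping, hand the query vector to the C++ store for this search, and look up the winning IDs' metadata in Postgres.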

Why we built it

A lot of AI apps need more than one-shot prompts. They need a reliable way to keep context over time, retrieve the right information when needed, and stay deployable inside your own environment.

SupaVector is our take on that: a self-hosted memory engine you can run yourself and plug into your agents, backends, or apps.

Current scope

Right now, the public repo is focused on:

  • single-node, self-hosted deployment
  • running in your own environment
  • using your own model/provider credentials

Extras

It also includes:

  • a CLI for setup and local operations
  • support for write / search / ask workflows
  • provider support centered on OpenAI, with Gemini and Anthropic support where available
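The write / search / ask split maps naturally onto CLI subcommands. The skeleton below is a hypothetical illustration of that structure: the subcommand names come from the post, but the program name (`memctl`), flags, and defaults are all assumptions, not SupaVector's actual CLI.

```python
import argparse

def build_parser():
    # Hypothetical CLI skeleton mirroring the write / search / ask
    # workflows; the real SupaVector CLI may use different names and flags.
    parser = argparse.ArgumentParser(prog="memctl")
    sub = parser.add_subparsers(dest="command", required=True)

    write = sub.add_parser("write", help="ingest a document or note")
    write.add_argument("text")

    search = sub.add_parser("search", help="retrieve similar stored context")
    search.add_argument("query")
    search.add_argument("--top-k", type=int, default=5)

    ask = sub.add_parser("ask", help="grounded Q&A over stored knowledge")
    ask.add_argument("question")
    return parser

args = build_parser().parse_args(["search", "deployment notes", "--top-k", "3"])
print(args.command, args.top_k)
# → search 3
```

Each subcommand would then call the gateway's corresponding API with the configured provider credentials.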

Repo

GitHub: https://github.com/Emmanuel-Bamidele/supavector

We’d really love feedback, especially on the developer experience, the architecture, and where this could be most useful in real AI products.
