# Run Postgres-backed LLM memory
## The official way, according to the LangGraph docs

### 1. Short-term memory via `PostgresSaver`

This handles thread-scoped persistence, i.e., remembering the conversation within the current session. The documentation shows:
```python
from langgraph.checkpoint.postgres import PostgresSaver
from langgraph.graph import StateGraph

DB_URI = "postgresql://postgres:postgres@localhost:5442/postgres?sslmode=disable"

with PostgresSaver.from_conn_string(DB_URI) as checkpointer:
    builder = StateGraph(...)
    graph = builder.compile(checkpointer=checkpointer)
```
You must call `checkpointer.setup()` once (the initial migration) to create the necessary tables.
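A one-off migration script is enough for that; a minimal sketch, assuming the same connection string as above (the script name is hypothetical):

```python
# setup_checkpointer.py -- run once before the app starts serving traffic
from langgraph.checkpoint.postgres import PostgresSaver

DB_URI = "postgresql://postgres:postgres@localhost:5442/postgres?sslmode=disable"

with PostgresSaver.from_conn_string(DB_URI) as checkpointer:
    checkpointer.setup()  # creates the checkpoint tables; safe to re-run
```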
There's also an async version, `AsyncPostgresSaver`, if your flow is async.
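A minimal sketch of the async variant, mirroring the snippet above (note the import path ends in `.aio`; the wrapper function is just a placeholder):

```python
from langgraph.checkpoint.postgres.aio import AsyncPostgresSaver
from langgraph.graph import StateGraph

DB_URI = "postgresql://postgres:postgres@localhost:5442/postgres?sslmode=disable"

async def build_and_run():
    # Same pattern as the sync saver, but with `async with` and awaits.
    async with AsyncPostgresSaver.from_conn_string(DB_URI) as checkpointer:
        await checkpointer.setup()  # only needed on the first run
        builder = StateGraph(...)
        graph = builder.compile(checkpointer=checkpointer)
        # ... await graph.ainvoke(...) / iterate graph.astream(...) here
```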
### 2. Long-term memory via `PostgresStore`

If you ever want to persist user data across sessions (not just per-thread memory), LangGraph supports that too, although for your use case this is optional.
```python
from langgraph.store.postgres import PostgresStore
from langgraph.graph import StateGraph

# from_conn_string() is a context manager, like the checkpointer's.
with PostgresStore.from_conn_string(DB_URI, ...) as store:
    store.setup()  # run the store migrations once
    builder = StateGraph(...)
    graph = builder.compile(checkpointer=checkpointer, store=store)
```
You can also use the store's embedded semantic search over whatever you save in it.
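For example, a hedged sketch of a store with an embedding index (the `index` settings and the OpenAI embedding model are assumptions, and the vector search needs the pgvector extension in your database):

```python
from langchain_openai import OpenAIEmbeddings  # assumption: any embeddings provider works
from langgraph.store.postgres import PostgresStore

DB_URI = "postgresql://postgres:postgres@localhost:5442/postgres?sslmode=disable"

with PostgresStore.from_conn_string(
    DB_URI,
    index={"dims": 1536, "embed": OpenAIEmbeddings(model="text-embedding-3-small")},
) as store:
    store.setup()  # run the store migrations once

    # Save a memory under a per-user namespace, then retrieve it semantically.
    store.put(("memories", "user-123"), "pref-1", {"text": "prefers concise answers"})
    for item in store.search(("memories", "user-123"), query="how should replies be styled?"):
        print(item.key, item.value)
```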
## How to integrate this into your code (fast, clean, and worker-safe)

You don't need to reinvent the wheel. Just refactor your session/memory logic to use LangGraph's built-in Postgres capabilities. Here's a rough sketch:
### memory.py: replace the in-memory saver with Postgres
```python
import os

from psycopg_pool import AsyncConnectionPool
from langgraph.checkpoint.postgres.aio import AsyncPostgresSaver

DB_URI = os.environ["DATABASE_URL"]
_checkpointer = None  # one saver per worker, created lazily

async def get_checkpointer() -> AsyncPostgresSaver:
    global _checkpointer
    if _checkpointer is None:
        # from_conn_string() is a context manager, so wrap a connection pool instead
        pool = AsyncConnectionPool(DB_URI, open=False, kwargs={"autocommit": True, "prepare_threshold": 0})
        await pool.open()
        _checkpointer = AsyncPostgresSaver(pool)
        await _checkpointer.setup()  # run migrations (only needed the first time)
    return _checkpointer
```
### In your API endpoint (simplified)
```python
from your_memory_module import get_checkpointer
from langgraph.graph import StateGraph, MessagesState, START

async def stream_chat_with_memory(...):
    checkpointer = await get_checkpointer()

    builder = StateGraph(MessagesState)
    # ... add nodes, etc.
    graph = builder.compile(checkpointer=checkpointer)

    async for ev in graph.astream(
        {"messages": [{"role": "user", "content": message}]},
        {"configurable": {"thread_id": session_id, "sys_text": sys_text}},
        stream_mode="values",
    ):
        yield ai_utils.chunk_text(ev["messages"][-1].content)
```
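The `# ... add nodes, etc.` elision above assumes the graph has at least one model node. A minimal sketch of what could go there, right after `builder = StateGraph(MessagesState)` inside `stream_chat_with_memory` (the `ChatOpenAI` model and the node name are placeholders for whatever you actually run):

```python
from langchain_openai import ChatOpenAI  # assumption: swap in your own chat model

llm = ChatOpenAI(model="gpt-4o-mini")

def call_model(state: MessagesState):
    # MessagesState's reducer appends the returned message to the thread history,
    # which the Postgres checkpointer then persists per thread_id.
    return {"messages": [llm.invoke(state["messages"])]}

builder.add_node("call_model", call_model)
builder.add_edge(START, "call_model")
```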
**Why this works:**

- **Shared across all workers.** Postgres is centralized, so any worker can read and write session threads.
- **No race conditions.** The checkpointer handles concurrency at the DB level.
- **Auto-expiry.** You can periodically or lazily prune old sessions if you like, but it's not strictly necessary (see the sketch after this list).
- **DSL-compatible.** You can still use LangGraph's native APIs (StateGraph, etc.) as intended.
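If you do decide to prune, recent langgraph-checkpoint-postgres releases expose `delete_thread` / `adelete_thread` on the saver; a hedged sketch, assuming your installed version has it and reusing the `get_checkpointer()` helper from memory.py:

```python
async def drop_session(session_id: str) -> None:
    # Remove every checkpoint stored for one thread/session.
    checkpointer = await get_checkpointer()
    await checkpointer.adelete_thread(session_id)
```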
## TL;DR

- Use `PostgresSaver` (or `AsyncPostgresSaver`) for short-term/thread memory.
- (Optional) Use `PostgresStore` for long-term memory if you ever want to persist beyond a single session.

This is the setup the LangGraph docs recommend for production deployments.
Want me to rewrite your entire FastAPI handler or memory.py in fully functional form that just swaps in Postgres, keeping your API surface identical?