ClawMemory – Git for AI agent memory (forkable memory for AI agents)

Brettinhere · 2 days ago
Hi HN,

I built ClawMemory because my AI agent kept waking up with amnesia.

The problem isn't the model. GPT-4, Claude, and other LLMs are stateless by design, and that's fine. The real problem is the agent layer. Every time an agent framework starts a new session (like in OpenClaw), the agent forgets everything that happened before. The architectural decisions we made, the experiments that failed, the context that took hours to build—all gone. The model is smart, but the agent behaves like a goldfish.

My first attempt at a fix was a MEMORY.md file in the project repo. That worked for a while, but eventually it became too long for the context window and the agent could no longer load its own history.

Then I tried RAG with a vector database. That worked well for retrieving isolated facts, but failed at something much more important: understanding how decisions evolved over time. Vector search can answer, "Did we ever discuss Redis?" But it cannot answer, "Why did we switch from Redis to Postgres?"

What I actually wanted was version control for agent sessions. Something like Git, but for AI memory.

ClawMemory treats AI conversations like source code. Each session becomes a commit. When a session ends, the agent commits that memory to a repository. Over time, the repository becomes a timeline of the agent's reasoning—not just what the agent knows, but how it got there.
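To make the sessions-as-commits idea concrete, here is a minimal sketch of a content-addressed commit chain, in the spirit of Git's commit objects. This is purely illustrative: the function names, fields, and hashing scheme are my assumptions, not ClawMemory's actual storage format.

```python
import hashlib
import json

def commit(parent_hash, transcript):
    """Create a content-addressed memory commit (hypothetical sketch,
    not ClawMemory's real format). Like Git, each commit points at its
    parent, so the chain preserves the order in which decisions were made."""
    body = json.dumps({"parent": parent_hash, "transcript": transcript},
                      sort_keys=True)
    return {
        "hash": hashlib.sha256(body.encode()).hexdigest(),
        "parent": parent_hash,
        "transcript": transcript,
    }

# Each finished session appends one commit to the chain.
root = commit(None, ["user: let's cache with Redis", "agent: agreed, set up Redis"])
head = commit(root["hash"], ["user: Redis keeps OOMing", "agent: migrated to Postgres"])

# Walking parent pointers recovers *how* the agent got here,
# which is exactly what flat vector search cannot answer.
assert head["parent"] == root["hash"]
```

Because each hash covers the parent hash too, the chain is tamper-evident the same way a Git history is: changing an old session would change every hash after it.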

The most interesting idea here is forkable memory. If someone has spent months working with an AI agent on the same domain you're exploring, you shouldn't have to start from zero. You can fork their memory repository. Your agent loads that context on startup and continues from there.
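In the commit-chain picture, forking is cheap: a fork is just a new commit whose parent is someone else's head, so both lines of work share all the history up to that point. Again a hypothetical sketch, with names of my own invention:

```python
import hashlib
import json

def commit(parent_hash, note):
    """Content-addressed memory commit (illustrative, not ClawMemory's format)."""
    body = json.dumps([parent_hash, note])
    return {"hash": hashlib.sha256(body.encode()).hexdigest(),
            "parent": parent_hash, "note": note}

# Months of upstream work on a domain, as a chain of session commits.
a = commit(None, "evaluated Redis vs Postgres, chose Postgres")
b = commit(a["hash"], "added connection pooling after load test")

# Forking: my agent starts from the upstream head instead of from zero,
# then diverges with its own commits.
fork_head = commit(b["hash"], "my agent: explored sharding on top of Postgres")

# Both agents share the same ancestry up to commit b.
assert fork_head["parent"] == b["hash"]
```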

This is exactly how open source software works: the first person solves a problem and documents it in code, and everyone else forks the repository to build on top of it. ClawMemory applies that model to AI reasoning. Instead of copying prompts or reading blog posts, your agent inherits the thinking process itself.

What works today: You can browse public agent memory repositories at https://clawmemory.ai/explore. Each repo shows a timeline of sessions.

Other things that currently work:

- Import ChatGPT conversations from an OpenAI export ZIP
- Automatic commits every few exchanges
- REST API for any agent that can send an HTTP request (model-agnostic, works with any framework)
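Since the post doesn't document the API, here is only a guess at what committing a session over HTTP could look like. The endpoint path and payload fields below are assumptions for illustration; the real ClawMemory API may differ.

```python
import json
import urllib.request

# Hypothetical endpoint -- the actual ClawMemory route is not documented here.
CLAWMEMORY_URL = "https://clawmemory.ai/api/v1/repos/my-agent/commits"

def build_commit_payload(session_id, exchanges):
    """Package a finished session as a memory commit for the REST API.
    The field names are assumptions, not the documented schema."""
    return {"session_id": session_id, "exchanges": exchanges}

def push_commit(payload, url=CLAWMEMORY_URL):
    """POST the commit. Anything that can send an HTTP request can do this,
    which is what makes the approach model- and framework-agnostic."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)

payload = build_commit_payload("session-42", [
    {"role": "user", "content": "Why did we drop Redis?"},
    {"role": "assistant", "content": "It kept OOMing under the eviction load."},
])
# push_commit(payload)  # uncomment to actually send the request
```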

I built this because my OpenClaw agents kept forgetting everything we had already figured out. ClawMemory is an attempt to solve that—not by making models smarter, but by giving agents something they currently lack: persistent, forkable memory.

If you've built long-running AI agents or tried other memory systems, I'd love to hear what worked (or didn't).

https://clawmemory.ai