I built an open source memory layer for AI agents called Octopoda. It runs entirely locally: no cloud, no API keys, no external services. Everything stays on your machine.

The problem is simple: agents forget everything between sessions. Every time you restart your agent, it starts from scratch as if you'd never talked to it. I kept building hacky workarounds for this, so eventually I built a proper solution. It gives your agents:

- Persistent memory that survives restarts and crashes
- Semantic search, so they can find memories by meaning, not just exact keys
- Loop detection that catches when an agent is stuck doing the same thing over and over
- Messaging between agents so they can actually coordinate
- Crash recovery with snapshots you can roll back to
- Version history on every memory, so you can see exactly how your agent's knowledge changed over time
- Shared memory spaces so multiple agents can work from the same knowledge base

It also has Ollama integration for fact extraction if you want smarter memory, and semantic search runs locally with a small 33MB embedding model on CPU. So the whole stack can run completely offline on your own hardware, which I know matters to people here.

There are integrations for LangChain, CrewAI, AutoGen, and the OpenAI Agents SDK, plus an MCP server with 25 tools if you use Claude or Cursor. MIT licensed. I've been getting some great feedback today from other subs and would really love to hear what this community thinks. What would make this actually useful for your local setups?
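To make the loop-detection idea concrete: one simple approach is to watch the tail of an agent's action log for a repeating cycle. This is a minimal sketch of that technique, not Octopoda's actual implementation — the `is_looping` helper and its parameters are my own illustration:

```python
from collections import deque

def is_looping(actions, cycle_len=1, repeats=3):
    """Return True if the last cycle_len * repeats actions are the
    same cycle repeated `repeats` times in a row."""
    window = cycle_len * repeats
    if len(actions) < window:
        return False
    tail = list(actions)[-window:]
    cycle = tail[:cycle_len]
    # Compare each cycle_len-sized chunk of the tail against the first one.
    return all(tail[i:i + cycle_len] == cycle
               for i in range(0, window, cycle_len))

# Rolling log of recent agent actions.
history = deque(maxlen=50)
for act in ["search", "read", "search", "read", "search", "read"]:
    history.append(act)

# A 2-step cycle ("search", "read") repeated 3 times -> the agent is stuck.
print(is_looping(history, cycle_len=2, repeats=3))  # True
```

A real implementation would likely also fingerprint action arguments and tolerate minor variation, but the core check is this kind of window comparison.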
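For anyone unfamiliar with how local semantic search works under the hood: each memory is embedded as a vector, and queries are ranked by cosine similarity. Here's a self-contained sketch of that ranking step — the bag-of-words `embed` function below is a toy stand-in for the real 33MB embedding model, and all the names are my own, not Octopoda's API:

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in: bag-of-words counts. A real system would use a
    # small neural embedding model running on CPU instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(memories, query, top_k=2):
    # Rank stored memories by similarity to the query, best first.
    qv = embed(query)
    scored = [(cosine(embed(m), qv), m) for m in memories]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [m for score, m in scored[:top_k] if score > 0]

memories = [
    "user prefers dark mode in the editor",
    "deployment runs on a raspberry pi",
    "the editor theme should stay dark",
]
print(search(memories, "what editor theme does the user like"))
```

The point is that nothing here needs a network call: with a local embedding model in place of `embed`, the whole pipeline stays on your machine.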