Every LLM conversation starts from zero. RAG helps, but it can't learn from what's happening right now. MDA is my attempt to fix that.

MDA encodes knowledge as associative entity networks, updates in real time via the Oja rule (no backprop, no reindexing), and retrieves context by activating concept graphs rather than by similarity search. It runs CPU-first, is model-agnostic, and works with Ollama/OpenAI/Anthropic out of the box. It's available as an MCP server, with GPU acceleration support for batch workloads.

One thing I find genuinely interesting: multiple agents can share the same MDA instance and reason over a common memory. Agent A learns something, and Agent B picks it up through associative traversal, not by searching for it, but because the concept network connects them. It starts to feel less like retrieval and more like shared intuition.

On the benchmark: these numbers come from synthetic questions I wrote myself, not a community-constructed eval. Take them as directional, not definitive. MDA is not a RAG killer; the goal is to cover what RAG and LLMs leave on the table, not to replace them. If you run your own tests and find different results, I'd genuinely want to hear it.
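For anyone unfamiliar with the Oja rule mentioned above: it's a classic Hebbian update with a built-in normalization term, which is what makes bounded online learning possible without a separate renormalization or reindexing pass. Here's a minimal single-neuron sketch in pure Python; this is an illustration of the general rule, not code from the MDA repo:

```python
# Minimal single-neuron Oja rule sketch (pure Python, no dependencies).
# Illustrates the general update rule, NOT MDA's actual implementation.

def oja_update(w, x, lr=0.1):
    """One Oja step: w_i += lr * y * (x_i - y * w_i), where y = w . x.

    Plain Hebbian learning (w_i += lr * y * x_i) grows weights without
    bound; Oja's extra -y^2 * w_i term keeps the weight vector near
    unit norm, so no separate normalization pass is needed.
    """
    y = sum(wi * xi for wi, xi in zip(w, x))  # post-synaptic activation
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

# Repeated presentation of an input drives w toward that input's
# direction while the norm of w converges toward 1.
w = [0.5, 0.5]
for _ in range(200):
    w = oja_update(w, [1.0, 0.2])

norm = sum(wi * wi for wi in w) ** 0.5
print(norm)  # close to 1.0 after convergence
```

The self-normalizing behavior is the relevant property here: association strengths can be updated continuously from a stream of observations without ever rebuilding an index.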
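To make "retrieval by activating concept graphs" concrete, here is a hypothetical spreading-activation sketch. The graph, node names, edge weights, and the `decay`/`threshold` parameters are all illustrative assumptions on my part, not MDA's API; the point is only the mechanism, where context surfaces because activation flows along associations rather than via a similarity search:

```python
# Hypothetical spreading-activation sketch over a weighted concept
# graph. Names and parameters are illustrative, not from MDA.
from collections import defaultdict

def spread_activation(edges, seeds, decay=0.5, threshold=0.05):
    """Propagate activation from seed concepts along weighted edges.

    edges: {node: [(neighbor, weight), ...]}
    seeds: {node: initial_activation}
    Each hop multiplies activation by edge weight * decay; pulses below
    `threshold` are dropped, so propagation terminates even on cycles
    (assuming weight * decay < 1). Returns total activation per node.
    """
    activation = defaultdict(float, seeds)
    frontier = dict(seeds)
    while frontier:
        next_frontier = defaultdict(float)
        for node, act in frontier.items():
            for neighbor, weight in edges.get(node, []):
                pulse = act * weight * decay
                if pulse > threshold:
                    next_frontier[neighbor] += pulse
        for node, act in next_frontier.items():
            activation[node] += act
        frontier = next_frontier
    return dict(activation)

# Toy graph: activating "deploy" also surfaces transitively linked
# concepts, without any embedding lookup.
graph = {
    "deploy": [("kubernetes", 0.9), ("ci", 0.6)],
    "kubernetes": [("helm", 0.8)],
    "ci": [("github-actions", 0.7)],
}
print(spread_activation(graph, {"deploy": 1.0}))
```

This also shows why shared memory between agents works the way the post describes: if Agent A strengthened the `deploy -> kubernetes` edge, Agent B reaches `kubernetes` (and, transitively, `helm`) simply by activating `deploy`, never by querying for it.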
Inference model: Qwen3 6-35B-A3B / Judge model: Claude Haiku

If you'd like to share what you think MDA does well or where it falls short, I'd love to hear it.

Source code: https://github.com/rangle2/mda