The problem I kept hitting: every LLM call starts from zero.
Ask it to help with your project and it suggests Postgres when you committed
to JSON files. It recommends langchain when you explicitly banned it. It
proposes rebuilding a module you decided to extend six months ago.
The usual fix is manually pasting context into every prompt. That doesn't
scale and drifts the moment anyone forgets to update it.
So I built Mneme.
---
**What it does**
You define your project memory once — rules, constraints, architecture
decisions, anti-patterns — in a plain JSON file.
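For example (a hypothetical file — the field names here are my illustration of a plausible schema, not necessarily the one Mneme ships with):

```json
{
  "items": [
    {
      "id": "rule-001",
      "type": "rule",
      "text": "Extend current infrastructure before rebuilding.",
      "tags": ["architecture", "rebuild"]
    },
    {
      "id": "decision-004",
      "type": "decision",
      "text": "Declined sentence-transformers: heavy ML dependency, breaks the pip-install contract.",
      "tags": ["dependencies", "retrieval"]
    }
  ]
}
```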
Mneme retrieves the relevant items for each query and injects them as
structured system context before the LLM call.
**Without Mneme:**
> "We could rebuild using a vector database and sentence-transformers.
> This would improve semantic matching long-term..."
**With Mneme:**
> "Do not rebuild from scratch. rule-001: extend current infrastructure
> before rebuilding. The team already declined sentence-transformers —
> heavy ML dependency, breaks the pip-install contract. Extend the
> current retriever instead."
Same model. Same question. Different answer — because it has your actual
decisions.
---
**How it works**
Five stages:

1. Load memory from a JSON file
2. Deterministic retrieval (keyword + tag scoring, no embeddings)
3. Build a structured context packet
4. Inject as system prompt
5. Optional: score the response against the injected rules
No vector database. No long context windows. No agent loops.
The goal isn't to give the model more information — it's to make it
respect prior decisions.
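The load/retrieve/packet stages could be sketched like this (my reconstruction from the description above, assuming a `{"items": [...]}` memory schema — not Mneme's actual code):

```python
import json
import re

def load_memory(path):
    """Stage 1: load rules/decisions from a plain JSON file (schema assumed)."""
    with open(path) as f:
        return json.load(f)["items"]

def score(item, query_words):
    """Stage 2: deterministic scoring -- keyword overlap, with tag hits
    weighted higher. No embeddings, so results are fully reproducible."""
    text_words = set(re.findall(r"\w+", item["text"].lower()))
    tag_words = set(t.lower() for t in item.get("tags", []))
    return len(query_words & text_words) + 2 * len(query_words & tag_words)

def build_context(items, query, top_k=3):
    """Stages 2-3: rank items against the query and build a structured
    context packet. Stage 4 would prepend this as the system prompt."""
    query_words = set(re.findall(r"\w+", query.lower()))
    ranked = sorted(items, key=lambda it: score(it, query_words), reverse=True)
    selected = [it for it in ranked[:top_k] if score(it, query_words) > 0]
    return "\n".join(f"[{it['id']}] {it['text']}" for it in selected)
```

The tag weight (2x) and `top_k` cutoff are illustrative knobs; the point is that everything here is deterministic string matching.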
---
**API layer**
There's a minimal FastAPI endpoint if you want to call it from another
workflow:
```
POST /complete
{ "question": "Should we rebuild?", "memory": "project_memory.json" }
```
Returns the answer + a summary of what context was injected.
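Roughly, the handler does something like this (an illustrative sketch, not the repo's code; `llm` is a stand-in callable, and the memory schema is assumed):

```python
import json

def complete(payload, llm):
    """Sketch of the POST /complete logic: load memory, retrieve matching
    items, inject them as system context, and report what was injected.
    `llm` is a hypothetical stand-in for the actual model call."""
    with open(payload["memory"]) as f:
        items = json.load(f)["items"]
    # Naive keyword overlap, standing in for the real scorer.
    words = set(payload["question"].lower().split())
    injected = [it for it in items
                if words & set(it["text"].lower().split())]
    context = "\n".join(f"[{it['id']}] {it['text']}" for it in injected)
    return {
        "answer": llm(system=context, user=payload["question"]),
        "injected": [it["id"] for it in injected],
    }
```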
---
It's early and intentionally narrow. The v1 goal was to prove the loop
works and keep it pip-installable in under 30 seconds.
Repo: https://github.com/TheoV823/mneme
Happy to answer questions about the retrieval approach or the evaluator.