I open-sourced a local-first LLM wiki for research and durable memory

I’ve been building a small tool called oamc around a workflow I wanted for personal research and long-running project memory.

The basic idea: instead of querying raw notes and documents over and over, sources get ingested into a maintained markdown wiki. The wiki becomes the working knowledge layer, and future questions are asked against that layer rather than against the raw text every time.

The pipeline is:

  • drop or clip sources into an inbox
  • ingest them into source, concept, entity, and synthesis pages
  • ask questions against the wiki
  • save useful answers back as new synthesis pages
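
That loop can be sketched in a few lines of Python. This is a minimal sketch under my own assumptions — the file layout, page naming, and the `llm` stub are illustrations, not oamc's actual implementation:

```python
from pathlib import Path


def llm(prompt: str) -> str:
    # Placeholder: swap in whatever model call you actually use.
    return f"[model output for {len(prompt)} chars of prompt]"


def ingest(inbox: Path, wiki: Path) -> list[Path]:
    """Turn each raw source dropped in the inbox into a markdown source page."""
    wiki.mkdir(parents=True, exist_ok=True)
    pages = []
    for src in sorted(inbox.glob("*.txt")):
        page = wiki / f"source-{src.stem}.md"
        page.write_text(f"# {src.stem}\n\n{llm(src.read_text())}\n")
        pages.append(page)
    return pages


def ask(wiki: Path, question: str) -> str:
    """Answer against the wiki pages, not the raw sources."""
    context = "\n\n".join(p.read_text() for p in sorted(wiki.glob("*.md")))
    return llm(f"Context:\n{context}\n\nQuestion: {question}")


def save_synthesis(wiki: Path, title: str, answer: str) -> Path:
    """Write a useful answer back into the wiki as a new synthesis page."""
    page = wiki / f"synthesis-{title}.md"
    page.write_text(f"# {title}\n\n{answer}\n")
    return page
```

The point of the shape is that `ask` only ever reads the wiki directory, so the raw sources can be archived once they've been ingested.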

A few things I cared about:

  • local-first workflow
  • markdown as the actual knowledge layer
  • inspectable files instead of hidden memory
  • lighter than standing up a full RAG stack
  • works well with Obsidian, but doesn’t depend on it conceptually
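
To make "inspectable files instead of hidden memory" concrete: a synthesis page can just be a plain markdown file you can open, diff, and edit by hand. The frontmatter fields and naming below are hypothetical, not oamc's actual format:

```markdown
---
type: synthesis
sources: [source-some-paper, concept-some-idea]
---

# Example synthesis page

A distilled answer, saved back into the wiki so future
questions build on it instead of re-reading raw sources.
```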

There’s also a small local dashboard and a macOS menubar app so it can keep running in the background.

This was inspired by Andrej Karpathy’s “LLM Wiki” idea. I was basically trying to turn that pattern into something I’d genuinely use day to day.

Repo:
https://github.com/michiosw/oamc

I’d especially love feedback from people here on:

  • wiki-first vs RAG-first for personal knowledge
  • where this approach starts breaking down at scale
  • whether markdown artifacts are actually a better interface for long-term LLM memory than embeddings + retrieval alone
submitted by /u/Sad_Light_1354