Research workflow question for people who work with papers, docs, or long-running investigations:
A lot of AI tooling is still good at producing answers and bad at preserving understanding. You upload a pile of material, ask a few questions, get decent output, and then the next session starts again from near zero.
What seems more interesting is compiling raw sources into a persistent markdown wiki that keeps structure, links concepts together, and gets better when useful answers are saved back into it.
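To make the "save answers back" part concrete, here is a minimal sketch of the idea, not how llm-wiki-compiler actually does it. Everything here (the `save_answer` function, the slug scheme, the page layout) is hypothetical, just to show that persisting an answer into a topic page is a small amount of plumbing:

```python
from pathlib import Path
import re

def save_answer(wiki_dir: Path, topic: str, question: str, answer: str) -> Path:
    """Append a Q&A entry to the topic's markdown page, creating the page if needed.

    Hypothetical sketch: a real compiler would also extract links between
    concepts and update an index, not just append text.
    """
    wiki_dir.mkdir(parents=True, exist_ok=True)
    # Derive a filename slug from the topic, e.g. "Literature Review" -> "literature-review"
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    page = wiki_dir / f"{slug}.md"
    if not page.exists():
        page.write_text(f"# {topic}\n")
    # Each saved answer becomes a new section, so the page accumulates over sessions
    entry = f"\n## Q: {question}\n\n{answer}\n"
    page.write_text(page.read_text() + entry)
    return page
```

The point is that the wiki page, not the chat transcript, is the artifact that improves over time: the next session reads the page instead of re-deriving everything.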
That is why AtomicMem / llm-wiki-compiler caught my attention. It feels less like 'chat over documents' and more like building an actual knowledge artifact you can keep working against.
Repo if useful: https://github.com/atomicmemory/llm-wiki-compiler
Curious whether anyone here has tried this approach for research workflows, literature review, or team memory.