We’re RooAGI, and we built Lint-AI, a Rust CLI for indexing and retrieving evidence from large corpora of AI-generated documentation.
As AI systems produce more task notes, traces, reports, and decisions, the problem is no longer just storing the documents. The harder part is finding the right pieces of evidence when the same concept is described in multiple places, often with different terminology or different framing.
Lint-AI is our current retrieval layer for that problem.
What it does today:
- indexes large documentation corpora
- extracts lightweight entities and important terms
- supports hybrid retrieval using lexical, entity, term, and graph-aware scoring
- returns chunk-level evidence with --llm-context for downstream reviewer or LLM use
- exports doc, chunk, and entity graphs
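To make the hybrid retrieval bullet concrete, here is a minimal sketch of what fusing lexical, entity, term, and graph-aware signals into one ranking score can look like. This is an illustration only: the struct, weights, and linear combination are assumptions for the example, not Lint-AI's actual scoring code.

```rust
// Hypothetical score fusion for hybrid retrieval. Each candidate chunk
// carries one score per signal; a weighted sum produces the final rank.
struct ChunkScores {
    lexical: f32, // e.g. a BM25-style match against the query
    entity: f32,  // overlap with entities extracted from the query
    term: f32,    // overlap with important terms
    graph: f32,   // proximity in the doc/chunk/entity graph
}

// Weights are illustrative; a real system would tune them.
fn hybrid_score(s: &ChunkScores, w: &[f32; 4]) -> f32 {
    w[0] * s.lexical + w[1] * s.entity + w[2] * s.term + w[3] * s.graph
}

fn main() {
    let weights = [0.4, 0.3, 0.2, 0.1];
    let mut chunks = vec![
        ("chunk-a", ChunkScores { lexical: 0.9, entity: 0.1, term: 0.3, graph: 0.0 }),
        ("chunk-b", ChunkScores { lexical: 0.4, entity: 0.8, term: 0.6, graph: 0.5 }),
    ];
    // Sort candidates by fused score, highest first.
    chunks.sort_by(|a, b| {
        hybrid_score(&b.1, &weights)
            .partial_cmp(&hybrid_score(&a.1, &weights))
            .unwrap()
    });
    for (id, s) in &chunks {
        println!("{id}: {:.3}", hybrid_score(s, &weights));
    }
}
```

Note that chunk-b wins here despite the weaker lexical match, because the entity, term, and graph signals compensate: this is the case hybrid scoring exists for.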
Example:
./lint-ai /path/to/docs --llm-context "where docs describe the same concept differently" --result-count 8 --simplified
That command does not decide whether documents contradict each other. It retrieves the most relevant chunks so a reviewer layer can compare them.
Repo: https://github.com/RooAGI/Lint-AI
We’d appreciate feedback on:
retrieval/ranking design for documentation corpora
how to evaluate evidence retrieval quality for alignment workflows
what kinds of entity/relationship modeling would actually be useful here
Comments URL: https://news.ycombinator.com/item?id=47756584