[P] Affect-tagged recall and token-economic gating for persistent LLM agent memory

We implemented two mechanisms for managing long-term agent memory in production LLM systems that may interest this community.

1. Affect-tagged recall

Each memory entry carries a valence field (a float in [-1.0, +1.0]) assigned at write time from the interaction outcome: user corrections receive negative valence; successful task completions receive positive valence. During retrieval, valence modulates the ranking score alongside semantic similarity, creating an implicit reinforcement signal over the memory graph without requiring explicit reward modeling.
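A minimal sketch of what the scoring looks like (names, interfaces, and the `valenceWeight` knob are illustrative, not the actual prism-mcp API):

```typescript
// Affect-modulated ranking sketch. Valence nudges the score but
// cannot veto a strong semantic match.

interface MemoryEntry {
  text: string;
  embedding: number[];
  valence: number; // -1.0 (past failure) .. +1.0 (past success)
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Additive bias: similarity dominates the ordering, valence shifts it.
// valenceWeight is a hypothetical tuning parameter.
function rankScore(query: number[], m: MemoryEntry, valenceWeight = 0.15): number {
  return cosine(query, m.embedding) + valenceWeight * m.valence;
}
```

An additive bias, rather than a multiplicative filter, keeps valence a soft nudge: a strong semantic match with negative valence still outranks a weak match with positive valence.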

This produces emergent behavioral adaptation: the agent gradually de-prioritizes patterns associated with past failures without hard-coded rules. The valence acts as a soft retrieval bias, not a filter, so semantically strong matches still surface regardless of affect.

2. Token-economic gating with surprisal

Every memory write operation draws from a per-session cognitive token budget. A surprisal gate estimates information novelty against the existing memory corpus. High-novelty observations pass at base cost. Redundant or near-duplicate content is taxed at 2x. This creates economic pressure toward information compression and forces the agent to prioritize genuinely novel observations over verbose logging.
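A sketch of the write gate under these rules (the cost constants and novelty threshold are assumptions for illustration, not prism-mcp's actual values):

```typescript
// Token-economic write gate sketch: novelty against the existing
// corpus decides whether a write is charged base cost or the 2x tax.

interface WriteGate {
  budget: number; // per-session cognitive token budget
}

// Novelty = 1 - (max similarity to any existing entry).
// Assumes unit-normalized embeddings, so cosine reduces to a dot product.
function noveltyOf(candidate: number[], corpus: number[][]): number {
  let maxSim = 0;
  for (const e of corpus) {
    let dot = 0;
    for (let i = 0; i < e.length; i++) dot += candidate[i] * e[i];
    maxSim = Math.max(maxSim, dot);
  }
  return 1 - maxSim; // 0 = near-duplicate, 1 = entirely novel
}

const BASE_COST = 10;          // assumed base cost per write
const NOVELTY_THRESHOLD = 0.3; // assumed gate threshold
const REDUNDANCY_TAX = 2;      // redundant writes cost 2x, as described

function tryWrite(gate: WriteGate, candidate: number[], corpus: number[][]): boolean {
  const novelty = noveltyOf(candidate, corpus);
  const cost = novelty >= NOVELTY_THRESHOLD ? BASE_COST : BASE_COST * REDUNDANCY_TAX;
  if (gate.budget < cost) return false; // budget exhausted: write rejected
  gate.budget -= cost;
  corpus.push(candidate);
  return true;
}
```

The taxed duplicate path is what creates the compression pressure: once the budget runs low, only high-novelty observations can still afford to be written.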

The surprisal computation uses TurboQuant-inspired quantization (int8-compressed embeddings) for efficient corpus comparison, with 95%+ recall relative to float32 baselines.
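For intuition, here is the basic precision-for-speed trade in symmetric int8 quantization (TurboQuant itself is a more involved scheme; this only sketches the underlying idea):

```typescript
// Symmetric int8 quantization sketch: map floats into [-127, 127]
// with a per-vector scale, then compare in integer space.

function quantize(v: number[]): { q: Int8Array; scale: number } {
  const maxAbs = Math.max(...v.map(Math.abs), 1e-12);
  const scale = maxAbs / 127;
  const q = new Int8Array(v.length);
  for (let i = 0; i < v.length; i++) q[i] = Math.round(v[i] / scale);
  return { q, scale };
}

// Approximate dot product computed in int8 space, rescaled at the end.
function quantizedDot(
  a: { q: Int8Array; scale: number },
  b: { q: Int8Array; scale: number },
): number {
  let acc = 0;
  for (let i = 0; i < a.q.length; i++) acc += a.q[i] * b.q[i];
  return acc * a.scale * b.scale;
}
```

The compressed corpus is 4x smaller than float32 and the inner loop is pure integer arithmetic; the small rounding error is what the 95%+ recall figure is measuring against.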

Additional architectural components:

  • Multi-hop graph retrieval with ACT-R inspired spreading activation over the memory graph (vs. flat cosine similarity)
  • Adversarial self-improvement pipelines for memory quality validation
  • Role-scoped memory coordination for multi-agent systems
  • HDC (hyperdimensional computing) vectors for cognitive state routing
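The first component above can be sketched as follows (graph shape, hop count, and decay factor are illustrative assumptions, not the actual prism-mcp parameters):

```typescript
// ACT-R-style spreading activation sketch: seed nodes (e.g. top-k
// cosine matches) inject activation that spreads along weighted
// edges, decaying per hop, so multi-hop neighbors can surface even
// when they do not match the query directly.

type Graph = Map<string, { id: string; weight: number }[]>;

function spreadActivation(
  graph: Graph,
  seeds: Map<string, number>,
  hops = 2,
  decay = 0.5,
): Map<string, number> {
  const activation = new Map(seeds);
  let frontier = new Map(seeds);
  for (let h = 0; h < hops; h++) {
    const next = new Map<string, number>();
    for (const [id, act] of frontier) {
      for (const edge of graph.get(id) ?? []) {
        const passed = act * decay * edge.weight;
        next.set(edge.id, (next.get(edge.id) ?? 0) + passed);
        activation.set(edge.id, (activation.get(edge.id) ?? 0) + passed);
      }
    }
    frontier = next;
  }
  return activation;
}
```

This is what distinguishes graph retrieval from flat cosine similarity: a two-hop neighbor of a strong match receives nonzero activation even at zero direct similarity to the query.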

The implementation is pure TypeScript, runs locally against SQLite or Supabase, and is protocol-agnostic via MCP (Model Context Protocol). 1,151 tests. MIT licensed.

GitHub: https://github.com/dcostenco/prism-mcp

Looking for feedback on the affect-tagged recall approach. Current open questions: optimal valence decay functions over time, whether valence should propagate through graph edges, and whether surprisal gating generalizes beyond code-domain memories.
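For concreteness on the first open question, one candidate decay family is exponential half-life decay toward neutral (the half-life parameter here is hypothetical, not something the project has settled on):

```typescript
// Candidate valence decay: affect fades toward 0 (neutral) with a
// configurable half-life, so old failures stop biasing retrieval.
function decayedValence(valence: number, ageMs: number, halfLifeMs: number): number {
  return valence * Math.pow(0.5, ageMs / halfLifeMs);
}
```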

submitted by /u/dco44
