Most LLM agents treat the model as the entire cognitive system: the system prompt defines personality, RAG handles memory, and chain-of-thought handles planning. It works until it doesn't, and when it breaks, there's no structural theory to debug against.
This book takes a different approach: treat the LLM as a translation layer and build the actual cognitive architecture around it. Memory with Ebbinghaus forgetting curves and reconstructive distortion. Emotion using OCC appraisal models and PAD mood space. Decision-making through GOAP planners perturbed by prospect theory. Personality as system-wide parameter modulation with drift detection.
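To make the first item concrete, here's a minimal sketch of the Ebbinghaus forgetting curve in its common exponential form, R = e^(-t/S), where S is memory stability. Function and parameter names are mine for illustration, not the book's actual API:

```python
import math

def retention(t_hours: float, stability: float) -> float:
    """Probability a memory survives after t_hours, per R = exp(-t / S).

    Higher stability (e.g., from rehearsal) means slower decay.
    """
    return math.exp(-t_hours / stability)

# A fresh memory (S = 1h) fades fast; a rehearsed one (S = 24h) persists.
print(round(retention(1, 1.0), 3))    # 0.368
print(round(retention(1, 24.0), 3))   # 0.959
```

The book layers reconstructive distortion on top of decay like this, so recall isn't just lossy but biased.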
The underlying research comes from three fields that rarely cross-reference each other — cognitive science (ACT-R, CLARION, LIDA), game AI (The Sims autonomy system, Dwarf Fortress personality modeling, Halo behavior trees), and LLM agent engineering. 15 chapters, 120+ citations, working Python/JS code throughout. Free on GitHub.
This is a synthesis of existing research with working implementations, so I'd genuinely appreciate feedback on the substance: what's wrong, what's missing, and what doesn't hold up.