Most agent systems treat memory as advisory.
The agent reads past context, but nothing forces it to follow it.
So memory becomes a suggestion, not a constraint.
I’ve been thinking about the opposite direction:
what if memory were enforced at the system level?
Not “the agent remembers X”
but “the system rejects actions that contradict X”
For example:
memory says: user preference = vegetarian
agent suggests: order chicken
In most systems → allowed
In this model → rejected
So memory stops being context and becomes part of system state.
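A minimal sketch of what that enforcement layer might look like, in Python. All the names here (`ConstrainedMemory`, `validate`, the rule function) are hypothetical, not from any real agent framework — just one way to make memory a hard gate instead of context:

```python
# Sketch: memory entries become hard constraints checked before any action.
# All names are hypothetical; this is an illustration, not a real library.

class MemoryConstraintError(Exception):
    """Raised when an action contradicts stored memory."""
    pass

class ConstrainedMemory:
    def __init__(self):
        self.facts = {}   # e.g. {"diet": "vegetarian"}
        self.rules = []   # predicates: (facts, action) -> violation message or None

    def remember(self, key, value):
        self.facts[key] = value

    def add_rule(self, rule):
        self.rules.append(rule)

    def validate(self, action):
        # The system, not the agent, rejects contradicting actions.
        for rule in self.rules:
            violation = rule(self.facts, action)
            if violation:
                raise MemoryConstraintError(violation)
        return action

def no_meat_if_vegetarian(facts, action):
    meats = {"chicken", "beef", "pork"}
    if facts.get("diet") == "vegetarian" and action.get("item") in meats:
        return f"{action['item']} contradicts diet={facts['diet']}"
    return None

mem = ConstrainedMemory()
mem.remember("diet", "vegetarian")
mem.add_rule(no_meat_if_vegetarian)

mem.validate({"type": "order", "item": "salad"})        # allowed
try:
    mem.validate({"type": "order", "item": "chicken"})  # rejected at the system level
except MemoryConstraintError as e:
    print("rejected:", e)
```

The key design choice is that `validate` sits between the agent and the world: the agent can still propose anything, but the proposal only executes if it passes the memory-derived constraints.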
Curious how people think about this:
Should memory influence behavior, or constrain it?