cs.LG

Reflective Prompted Policy Optimization: Trajectory-Grounded Revision and Salience Bias

arXiv:2605.08315v1 Announce Type: new
Abstract: Existing LLM-based policy optimizers see only scalar rewards: they know that a policy scored 0.45, but not whether the agent got stuck in a loop, fell into a hole on the third step, or performed well on 19 out of …