Decocted Experience Improves Test-Time Inference in LLM Agents
arXiv:2604.04373v1
Abstract: There is growing interest in improving LLMs without updating model parameters. One well-established direction is test-time scaling, where increased inference-time computation (e.g., longer reasoning, sam…