cs.AI

Reasoning Compression with Mixed-Policy Distillation

arXiv:2605.08776v1 Announce Type: new
Abstract: Reasoning-centric large language models (LLMs) achieve strong performance by generating intermediate reasoning trajectories, but often incur excessive token usage and high inference-time decoding cost. W…
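The abstract is truncated before the method is described, but the general shape of a mixed-policy distillation objective — interpolating an off-policy term over teacher-generated reasoning traces with an on-policy term over student-generated traces scored by the teacher — can be sketched. Every name, the scoring function, and the interpolation form below are illustrative assumptions, not the paper's actual algorithm:

```python
import math

def avg_neg_logprob(token_logprobs, trace):
    """Average negative log-likelihood of a token trace under a
    (toy) per-token log-probability table. Stands in for scoring a
    reasoning trace with the teacher model."""
    return -sum(token_logprobs[t] for t in trace) / len(trace)

def mixed_policy_loss(teacher_logprobs, teacher_trace, student_trace, alpha=0.5):
    """Hypothetical mixed-policy distillation objective:
    alpha weights the off-policy term (teacher-generated trace),
    (1 - alpha) weights the on-policy term (student-generated trace,
    still scored by the teacher). alpha=1 recovers plain
    teacher-forced distillation; alpha=0 is fully on-policy."""
    off_policy = avg_neg_logprob(teacher_logprobs, teacher_trace)
    on_policy = avg_neg_logprob(teacher_logprobs, student_trace)
    return alpha * off_policy + (1 - alpha) * on_policy

# Toy vocabulary with teacher log-probabilities per token.
teacher_logprobs = {"a": math.log(0.5), "b": math.log(0.25), "c": math.log(0.25)}
loss = mixed_policy_loss(teacher_logprobs,
                         teacher_trace=["a", "a"],
                         student_trace=["b", "c"],
                         alpha=0.5)
```

In a real training loop the log-probability table would be replaced by teacher forward passes over sampled traces; the sketch only illustrates how the two policy sources could be blended into one scalar loss.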