cs.AI, cs.CL, cs.LG

Revisiting On-Policy Distillation: Empirical Failure Modes and Simple Fixes

arXiv:2603.25562v1 Announce Type: cross
Abstract: On-policy distillation (OPD) is appealing for large language model (LLM) post-training because it evaluates teacher feedback on student-generated rollouts rather than fixed teacher traces. In long-hori…
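The distinction the abstract draws — scoring teacher feedback on student-generated rollouts rather than on fixed teacher traces — can be made concrete with a small sketch. This is a generic illustration, not the paper's implementation: it assumes a token-level reverse KL objective, with both models scoring the same student-sampled sequence (all names hypothetical).

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def opd_reverse_kl(student_logits, teacher_logits):
    """Mean per-token reverse KL(student || teacher).

    Both inputs have shape (seq_len, vocab) and are logits computed
    on the SAME student-generated rollout -- the "on-policy" part:
    the teacher grades tokens the student actually sampled, instead
    of the student imitating fixed teacher traces.
    """
    p_s = softmax(student_logits)                   # student distribution
    log_p_s = np.log(p_s)
    log_p_t = np.log(softmax(teacher_logits))       # teacher log-probs
    kl = (p_s * (log_p_s - log_p_t)).sum(axis=-1)   # KL at each position
    return kl.mean()

rng = np.random.default_rng(0)
student = rng.normal(size=(8, 16))
teacher = rng.normal(size=(8, 16))
print(opd_reverse_kl(student, student))  # identical models -> 0.0
print(opd_reverse_kl(student, teacher))  # mismatch -> positive loss
```

In practice the rollout is sampled from the student, both models are run in teacher-forcing mode over it, and the gradient of this loss updates only the student.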