Lightning OPD: Efficient Post-Training for Large Reasoning Models with Offline On-Policy Distillation
arXiv:2604.13010v1 Announce Type: cross
Abstract: On-policy distillation (OPD) has emerged as an efficient post-training paradigm for large language models. However, standard OPD requires a live teacher inference server throughout training, resulting …
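The truncated abstract names the core constraint of standard OPD: the student samples its own rollouts, and a live teacher must score every sampled token throughout training. Below is a minimal sketch of one such step, assuming HuggingFace-style causal LMs (`generate` and `.logits`) and a per-token reverse-KL objective, which is one common choice in on-policy distillation; the function and argument names are illustrative, not the paper's implementation.

```python
# Hypothetical sketch of one standard (non-Lightning) OPD step, assuming
# HuggingFace-style causal LMs. Illustrative only; not the paper's code.
import torch
import torch.nn.functional as F

def opd_step(student, teacher, prompt_ids, max_new_tokens=64):
    prompt_len = prompt_ids.shape[1]

    # 1) On-policy rollout: the student samples its own continuation.
    with torch.no_grad():
        rollout = student.generate(prompt_ids, do_sample=True,
                                   max_new_tokens=max_new_tokens)

    # 2) Teacher scores the student's sampled tokens -- the step that
    #    normally requires a live teacher inference server during training.
    with torch.no_grad():
        t_logp = F.log_softmax(
            teacher(rollout).logits[:, prompt_len - 1:-1, :], dim=-1)

    # 3) Per-token reverse KL(student || teacher) at the sampled positions.
    s_logp = F.log_softmax(
        student(rollout).logits[:, prompt_len - 1:-1, :], dim=-1)
    kl = torch.sum(torch.exp(s_logp) * (s_logp - t_logp), dim=-1)
    return kl.mean()
```

Step (2) is the cost the abstract points at: because the rollouts are generated fresh by the student, the teacher's log-probabilities cannot be precomputed in the usual offline fashion and must be served online for every training batch.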