The Illusion of Certainty: Decoupling Capability and Calibration in On-Policy Distillation
arXiv:2604.16830v1 Announce Type: new
Abstract: On-policy distillation (OPD) is an increasingly important paradigm for post-training language models. However, we identify a pervasive Scaling Law of Miscalibration: while OPD effectively improves task a…
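As background for readers unfamiliar with the paradigm: on-policy distillation typically trains the student on its own sampled outputs, scored by the teacher, often via a reverse-KL objective. The sketch below is a generic, minimal illustration of that loss term on toy distributions; the function name and numbers are illustrative assumptions, not taken from this paper.

```python
import math

def reverse_kl(student_probs, teacher_probs):
    """KL(student || teacher), the loss term commonly used in on-policy
    distillation, evaluated over a toy vocabulary. Terms with zero
    student mass contribute nothing and are skipped."""
    return sum(
        p_s * math.log(p_s / p_t)
        for p_s, p_t in zip(student_probs, teacher_probs)
        if p_s > 0.0
    )

# Toy 3-token vocabulary (illustrative): an overconfident student
# versus a softer, better-calibrated teacher.
student = [0.90, 0.05, 0.05]
teacher = [0.60, 0.25, 0.15]

loss = reverse_kl(student, teacher)  # positive: distributions differ
```

Minimizing this term pulls the student's distribution toward the teacher's on states the student itself visits; the abstract's point is that gains in task accuracy from such training need not translate into well-calibrated confidence.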