cs.AI, cs.LG

Diversity-Aware Reverse Kullback-Leibler Divergence for Large Language Model Distillation

arXiv:2604.00223v1 Announce Type: cross
Abstract: Reverse Kullback-Leibler (RKL) divergence has recently emerged as the preferred objective for large language model (LLM) distillation, consistently outperforming forward KL (FKL), particularly in regim…
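The abstract contrasts forward and reverse KL as distillation objectives. As a rough illustration only (not the paper's method), the two directions can be sketched at the token level as below; the tensor names, shapes, and mean reduction are assumptions for the example.

```python
# Minimal sketch, assuming token-level distillation between a teacher and a
# student over a shared vocabulary. Not the paper's implementation.
import torch
import torch.nn.functional as F

def forward_kl(teacher_logits, student_logits):
    # FKL: KL(p_teacher || q_student), mass-covering: the student is pushed to
    # place probability everywhere the teacher does.
    p = F.softmax(teacher_logits, dim=-1)
    log_p = F.log_softmax(teacher_logits, dim=-1)
    log_q = F.log_softmax(student_logits, dim=-1)
    return (p * (log_p - log_q)).sum(dim=-1).mean()

def reverse_kl(teacher_logits, student_logits):
    # RKL: KL(q_student || p_teacher), mode-seeking: the student concentrates
    # on high-probability teacher modes, which can trade off output diversity.
    q = F.softmax(student_logits, dim=-1)
    log_q = F.log_softmax(student_logits, dim=-1)
    log_p = F.log_softmax(teacher_logits, dim=-1)
    return (q * (log_q - log_p)).sum(dim=-1).mean()

# Toy usage: batch of 2 sequences, 4 positions, vocabulary of 8 tokens.
teacher = torch.randn(2, 4, 8)
student = torch.randn(2, 4, 8, requires_grad=True)
loss_fkl = forward_kl(teacher, student)
loss_rkl = reverse_kl(teacher, student)
```

Both functions reduce to a scalar loss; in practice the distillation objective would typically be masked to valid (non-padding) positions before averaging.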