cs.AI, cs.LG

Distillation Traps and Guards: A Calibration Knob for LLM Distillability

arXiv:2604.18963v1 Announce Type: new
Abstract: Knowledge distillation (KD) transfers capabilities from large language models (LLMs) to smaller students, yet it can fail unpredictably, and it also underpins model-leakage risks. Our analysis revealed sever…
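
For readers unfamiliar with the setup the abstract assumes, below is a minimal sketch of the standard KD objective (Hinton et al., 2015): a temperature-softened KL term against the teacher's output distribution plus ordinary cross-entropy on hard labels. The `temperature` and `alpha` knobs here are the generic ones from that formulation, not the calibration knob this paper proposes, which the truncated abstract does not specify.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Generic KD objective: weighted sum of a temperature-softened
    KL term against the teacher and cross-entropy on hard labels.

    student_logits, teacher_logits: (batch, num_classes)
    labels: (batch,) integer class indices
    """
    # Soften both distributions; the T^2 factor rescales gradients so
    # the soft term's magnitude stays comparable across temperatures.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd_term = F.kl_div(log_student, soft_targets,
                       reduction="batchmean") * temperature ** 2

    # Hard-label supervision on the student's unscaled logits.
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1.0 - alpha) * ce_term
```

In this standard formulation, raising the temperature flattens the teacher's distribution and exposes more of its "dark knowledge" about non-target classes; how the paper's calibration knob interacts with distillability is left to the full text.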