Establishing a Scale for Kullback-Leibler Divergence in Language Models Across Various Settings

arXiv:2505.15353v3

Abstract: Log-likelihood vectors define a common space for comparing language models as probability distributions, enabling unified comparisons across heterogeneous settings. We extend this framework to training checkpoints and intermediate layers, and establish a consistent scale for KL divergence across pretraining, model size, random seeds, quantization, fine-tuning, and layers. Analysis of Pythia pretraining trajectories further shows that changes in log-likelihood space, as measured by the scaling behavior of KL divergence, are much smaller than changes in weight space, resulting in subdiffusive learning trajectories and early stabilization of language-model behavior despite continued weight drift.
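The core idea of comparing models through log-likelihood vectors can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes we have per-text log-likelihoods from two models P and Q on texts sampled from P, in which case the mean difference of the vectors gives a Monte Carlo estimate of D_KL(P || Q). The function name and toy numbers are invented for illustration.

```python
import numpy as np

def kl_from_loglik(logp_p, logp_q):
    """Monte Carlo estimate of D_KL(P || Q) from per-text log-likelihood
    vectors, assuming the texts were sampled from model P.

    D_KL(P || Q) = E_{x ~ P}[log P(x) - log Q(x)]
    """
    logp_p = np.asarray(logp_p, dtype=float)
    logp_q = np.asarray(logp_q, dtype=float)
    # Elementwise difference of log-likelihoods, averaged over texts.
    return float(np.mean(logp_p - logp_q))

# Toy example: log-likelihoods of four texts under two models
# (hypothetical values, nats).
logp_p = [-10.0, -12.5, -9.8, -11.2]
logp_q = [-11.0, -13.0, -10.3, -12.0]
print(kl_from_loglik(logp_p, logp_q))  # → 0.7
```

Because each model contributes only a vector of scalars per corpus, estimates like this one can be computed for heterogeneous objects (checkpoints, quantized variants, or even intermediate layers read out as distributions) without requiring the models to share an architecture or weight space.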
