Entropy Aware Reward Guidance for Diffusion Language Model Alignment
arXiv:2602.05000v2 Announce Type: replace-cross
Abstract: Reward guidance, also known as posterior sampling, is a popular method for test-time adaptation and post-training of continuous diffusion models. In this paper, we study reward guidance for discrete diffusion language models, where one cannot differentiate through the model's natural outputs because they are discrete tokens. We introduce a novel mechanism called EntRGi (Entropy aware Reward Guidance) to address this issue. EntRGi dynamically interpolates between continuous token relaxations and sampled hard tokens, on a token-by-token basis, using the diffusion model's predictive entropy. We demonstrate that EntRGi maintains both reward-model reliability and optimization accuracy, whereas existing approaches sacrifice one for the other. We empirically validate our approach on 7B-parameter diffusion language models in two settings: (1) test-time adaptation, and (2) RGRL (Reward Guided Reinforcement Learning), our recipe for post-training on reward-guided data, showing consistent improvements over state-of-the-art methods. Our code is available at https://atutej.github.io/entrgi-rgrl
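The abstract only sketches the interpolation mechanism at a high level. As a rough illustration, the minimal PyTorch sketch below mixes a continuous token relaxation (the expected embedding under the predictive distribution) with a straight-through hard sample, gated per token by predictive entropy. The function name `entropy_aware_mix`, the entropy normalization by log|V|, the straight-through estimator, and the direction of the gate are all assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def entropy_aware_mix(logits, embedding, tau=1.0):
    """Mix a continuous token relaxation with a sampled hard token,
    gated per position by the model's predictive entropy.

    logits:    (batch, seq_len, vocab) denoiser outputs
    embedding: (vocab, dim) token embedding table
    Returns a (batch, seq_len, dim) input for a differentiable reward model.
    """
    probs = F.softmax(logits / tau, dim=-1)

    # Per-position predictive entropy, normalized to [0, 1] by log|V|.
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)
    gate = (entropy / torch.log(torch.tensor(float(probs.size(-1))))).unsqueeze(-1)

    # Continuous relaxation: expected embedding under the predictive distribution.
    soft_emb = probs @ embedding

    # Hard path: sampled token, with a straight-through estimator so gradients
    # still flow back through the soft probabilities.
    hard_ids = torch.multinomial(probs.flatten(0, 1), 1).view(probs.shape[:-1])
    hard_onehot = F.one_hot(hard_ids, probs.size(-1)).to(probs.dtype)
    hard_onehot = hard_onehot + probs - probs.detach()  # straight-through
    hard_emb = hard_onehot @ embedding

    # Uncertain (high-entropy) positions lean on the relaxation; confident
    # positions use the sampled hard token.
    return gate * soft_emb + (1.0 - gate) * hard_emb
```

In a reward-guidance loop, the mixed embeddings would be fed to a differentiable reward model and the resulting gradient used to steer the denoising step; that surrounding loop is not shown here.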