National University of Singapore Presents "DMax": A New Paradigm For Diffusion Language Models (dLLMs) Enabling Aggressive Parallel Decoding.

TL;DR:

DMax mitigates error accumulation by reformulating decoding as a progressive self-refinement process, allowing the model to correct its own erroneous predictions during generation.


Abstract:

We present DMax, a new paradigm for efficient diffusion language models (dLLMs). It mitigates error accumulation in parallel decoding, enabling aggressive decoding parallelism while preserving generation quality. Unlike conventional masked dLLMs that decode through a binary mask-to-token transition, DMax reformulates decoding as a progressive self-refinement from mask embeddings to token embeddings.

At the core of our approach is On-Policy Uniform Training, a novel training strategy that efficiently unifies masked and uniform dLLMs, equipping the model to recover clean tokens from both masked inputs and its own erroneous predictions. Building on this foundation, we further propose Soft Parallel Decoding. We represent each intermediate decoding state as an interpolation between the predicted token embedding and the mask embedding, enabling iterative self-revising in embedding space.
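The On-Policy Uniform Training idea above can be illustrated with a minimal sketch: build training inputs where some positions are masked and others are replaced by the model's own (possibly wrong) predictions, so the model learns to recover clean tokens from both. All names here (`MASK`, `corrupt`, `model_predict`, the rate parameters) are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Sentinel id standing in for the mask token (illustrative).
MASK = -1

def corrupt(tokens, model_predict, mask_rate=0.3, onpolicy_rate=0.2):
    """Build a noisy training input: each position is independently
    masked, replaced by the model's own prediction (on-policy noise),
    or left clean. The training target is the original `tokens`."""
    preds = model_predict(tokens)  # model's own guesses per position
    noisy = list(tokens)
    for i in range(len(noisy)):
        r = random.random()
        if r < mask_rate:
            noisy[i] = MASK            # masked input (standard masked dLLM)
        elif r < mask_rate + onpolicy_rate:
            noisy[i] = preds[i]        # the model's own erroneous prediction
    return noisy

# Toy usage: a dummy "model" that always predicts token 0.
tokens = [1, 2, 3, 4, 5]
noisy_input = corrupt(tokens, lambda t: [0] * len(t))
```

Training on `noisy_input` with `tokens` as the target is what would, under this reading, unify the masked objective (recover from `MASK`) with the uniform objective (recover from wrong tokens).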

Extensive experiments across a variety of benchmarks demonstrate the effectiveness of DMax. Compared with the original LLaDA-2.0-mini, our method improves TPF (tokens per forward pass) on GSM8K from 2.04 to 5.47 while preserving accuracy. On MBPP, it increases TPF from 2.71 to 5.86 while maintaining comparable performance. On two H200 GPUs, our model achieves an average of 1,338 TPS (tokens per second) at batch size 1.


Layman's Explanation:

The core idea is that diffusion language models should be able to generate text faster than normal LLMs because they can fill in multiple tokens at the same time. In practice, though, that speed advantage gets limited because early wrong guesses tend to snowball. Once the model commits to a bad token, that bad token becomes part of the context for the next step, so quality can fall apart fast when decoding gets too aggressive. What DMax does is give the model a better way to recover from its own mistakes. Instead of moving in a rigid one-way path from masked slots to final tokens, it lets the model keep refining intermediate guesses before locking them in.

The paper’s two main ideas are pretty intuitive. First, the model is trained on its own imperfect predictions, so it learns how to clean up the kinds of errors it will actually make at inference time. Second, during decoding it uses a softer in-between representation rather than treating every guess as fully final right away, which preserves uncertainty and makes revision easier. The result is that DMax sustains much more aggressive parallel decoding without the usual collapse in quality. On the paper’s math and coding benchmarks, it gets large speedups while keeping accuracy close to the original model, and in some lower-parallelism settings it even improves accuracy a bit. So the main takeaway is not just “faster diffusion LLMs,” but diffusion LLMs that can revise themselves well enough to make aggressive parallel decoding actually practical.
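That "softer in-between representation" can be sketched as a linear interpolation in embedding space between the mask embedding and the predicted token's embedding, weighted by how confident the model is. This is a minimal sketch of the idea under that assumption; `soft_state` and `confidence` are hypothetical names, not the paper's API.

```python
import numpy as np

def soft_state(pred_embedding, mask_embedding, confidence):
    """Intermediate decoding state: an interpolation between the mask
    embedding and the predicted token embedding. confidence in [0, 1]
    controls how far the state has committed toward the prediction."""
    c = float(np.clip(confidence, 0.0, 1.0))
    return c * pred_embedding + (1.0 - c) * mask_embedding

# Toy example: 4-dim embeddings at one position.
mask_emb = np.zeros(4)
pred_emb = np.array([1.0, -2.0, 0.5, 3.0])

# Low confidence -> state stays close to the mask embedding,
# so later refinement steps can still revise this position.
low = soft_state(pred_emb, mask_emb, 0.2)

# High confidence -> state approaches the predicted token embedding.
high = soft_state(pred_emb, mask_emb, 0.9)
```

The design point is that a position is never irreversibly flipped from mask to token in one step; it drifts toward a token embedding over iterations, which is what makes self-revision possible during aggressive parallel decoding.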


Link to the Paper: https://arxiv.org/pdf/2604.08302

Link to the GitHub: https://github.com/czg1225/DMax

Link to the Models: https://huggingface.co/collections/Zigeng/dmax-models

Link to the Training Dataset: https://huggingface.co/collections/Zigeng/dmax-training-data
submitted by /u/44th--Hokage
