Cola DLM (Continuous Latent Diffusion Language Model) is a hierarchical continuous latent-space diffusion language model. It combines a Text VAE with a block-causal Diffusion Transformer (DiT) prior: the VAE maps text into continuous latent sequences and decodes latents back to tokens, while the DiT performs latent prior transport through Flow Matching.
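For intuition, here is a minimal, hedged PyTorch sketch of a Flow Matching loss on continuous latents. The `dit` callable and the latent shapes are illustrative assumptions for this sketch, not this repository's actual API.

```python
# Hedged sketch of a Flow Matching (rectified-flow style) loss on continuous
# latents. `dit` is a hypothetical stand-in for the block-causal DiT prior.
import torch

def flow_matching_loss(dit, z1: torch.Tensor) -> torch.Tensor:
    """z1: clean VAE latents, shape (batch, seq_len, latent_dim)."""
    z0 = torch.randn_like(z1)                            # Gaussian noise endpoint
    t = torch.rand(z1.size(0), 1, 1, device=z1.device)   # per-sample time in [0, 1]
    zt = (1.0 - t) * z0 + t * z1                         # linear interpolation path
    v_target = z1 - z0                                   # constant target velocity
    v_pred = dit(zt, t.view(-1))                         # DiT predicts the velocity field
    return torch.mean((v_pred - v_target) ** 2)          # MSE velocity-matching loss
```

With linear interpolation paths, the target velocity is the constant difference z1 - z0, which is what makes this form of Flow Matching simulation-free to train.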
This model repository contains the HuggingFace-format checkpoint for the paper *Continuous Latent Diffusion Language Model*.
## Model Details
- Architecture: Text VAE + block-causal DiT latent prior.
- Training objective: two-stage training with Text VAE pretraining followed by joint Text VAE + DiT training using Flow Matching.
- Training-compute checkpoint: the released weights correspond to the 2000 EFLOPs checkpoint reported in the paper's RQ4 scaling curve.
- Tokenizer: OLMo 2 tokenizer with a 100,278-entry vocabulary.
- Special token ids: `pad_token_id=100277`, `eos_token_id=100257`, `im_end_token_id=100265`.
- Framework: PyTorch 2.1+ and HuggingFace Transformers 4.40+ (see the loading sketch after this list).
- License: Apache License 2.0.
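As referenced above, the following is a hedged loading sketch. The repo id is a placeholder, and the exact model class exposed by this checkpoint's custom code may differ from the generic `AutoModel` entry point used here.

```python
# Hedged loading sketch; "<repo-id>" is a placeholder and the exact model
# class shipped by this checkpoint's custom code may differ.
from transformers import AutoModel, AutoTokenizer

repo = "<repo-id>"  # placeholder: substitute this repository's HF id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo, trust_remote_code=True)

# Sanity-check the special token ids listed above.
assert tokenizer.pad_token_id == 100277
assert tokenizer.eos_token_id == 100257
```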