Principled Latent Diffusion for Graphs via Laplacian Autoencoders
arXiv:2601.13780v3 Announce Type: replace
Abstract: Graph diffusion models achieve state-of-the-art performance in graph generation but suffer from quadratic complexity in the number of nodes -- and much of their capacity is wasted modeling the absence of edges in sparse graphs. Inspired by latent diffusion in other modalities, a natural idea is to compress graphs into a low-dimensional latent space and perform diffusion in that space. However, unlike images or text, graph generation requires nearly lossless reconstruction, as even a single error in decoding an adjacency matrix can render the entire sample invalid. This challenge has remained largely unaddressed. We propose LG-Flow, a latent graph diffusion framework that directly overcomes these obstacles. A permutation-equivariant autoencoder maps nodes to fixed-dimensional embeddings that enable near-lossless reconstruction of both undirected graphs and DAGs. The dimensionality of this latent representation scales linearly with the number of nodes, thereby removing the quadratic adjacency-space bottleneck in the diffusion process and enabling the training of substantially larger generative backbones. In this latent space, we train a Diffusion Transformer with flow matching, enabling efficient and expressive graph generation. Our approach achieves competitive results against state-of-the-art graph diffusion models while delivering up to a $1000\times$ speed-up. Our code is available at https://github.com/asiraudin/LG-Flow.
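To make the latent flow-matching objective concrete, the following is a minimal numpy sketch of how a training pair could be constructed: each graph is encoded to fixed-dimensional node embeddings (latent size linear in the number of nodes), and a velocity-regression target is built along a linear interpolation path between Gaussian noise and the clean latent. All names, shapes, and the `flow_matching_pair` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def flow_matching_pair(z1, rng, t=None):
    """Build one flow-matching training pair (hypothetical sketch).

    z1 : clean latent of shape (n_nodes, d), as produced by an encoder
         that maps each node to a fixed-dimensional embedding.
    Returns the interpolant z_t, the velocity target v = z1 - z0, and t.
    """
    z0 = rng.standard_normal(z1.shape)   # Gaussian noise endpoint
    if t is None:
        t = rng.uniform()                # time sampled uniformly in [0, 1]
    z_t = (1.0 - t) * z0 + t * z1        # linear interpolation path
    v_target = z1 - z0                   # constant velocity along the path
    return z_t, v_target, t

rng = np.random.default_rng(0)
z1 = rng.standard_normal((8, 16))        # e.g. 8 nodes, 16-dim latents
z_t, v_target, t = flow_matching_pair(z1, rng)

# A Diffusion Transformer v_theta(z_t, t) would be trained to minimize
# ||v_theta(z_t, t) - v_target||^2; at t = 1 the interpolant recovers z1.
assert np.allclose((1.0 - t) * (z1 - v_target) + t * z1, z_t)
```

Because the latent lives in node-embedding space rather than adjacency space, the denoising network sees an input of size O(n·d) instead of O(n²), which is the source of the claimed efficiency gains.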