PDGMM-VAE: A Variational Autoencoder with Adaptive Per-Dimension Gaussian Mixture Model Priors for Nonlinear ICA
arXiv:2603.23547v2 Announce Type: replace
Abstract: Independent component analysis (ICA) is a core framework within blind source separation for recovering latent source signals from observed mixtures under statistical independence assumptions. In this work, we propose PDGMM-VAE, a source-oriented variational autoencoder (VAE) in which each latent dimension, interpreted explicitly as an individual source component, is assigned its own adaptive Gaussian mixture model (GMM) prior. The proposed framework imposes heterogeneous per-dimension prior constraints, enabling different latent dimensions to model different non-Gaussian source marginals within a unified probabilistic encoder-decoder architecture. The parameters of these source-specific GMM priors are not fixed in advance but are learned jointly with the encoder and decoder under the overall training objective. Beyond the model construction itself, we provide a theoretical analysis clarifying why adaptive per-dimension prior design is meaningful in this setting. In particular, we show that heterogeneous per-dimension priors reduce latent permutation symmetry relative to homogeneous shared priors, and that the KL regularization induced by the adaptive GMM prior creates source-specific attraction behavior that helps explain source-wise specialization during training. We also clarify the relation of the proposed model to the standard VAE and provide a weak recovery statement in an idealized linear, low-noise regime. Experimental results on both linear and nonlinear mixing problems show that PDGMM-VAE recovers latent source signals and fits source-specific non-Gaussian marginals effectively. These results suggest that adaptive per-dimension mixture-prior design is a principled and promising direction for VAE-based ICA and source-oriented generative modeling.
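Since the abstract describes the architecture only at a high level, a minimal sketch may help fix ideas. Assuming a standard PyTorch setup, the snippet below gives one plausible reading, not the authors' implementation: each latent dimension carries its own learnable one-dimensional GMM prior, and because the KL divergence between a Gaussian posterior and a GMM prior has no closed form, the KL term is estimated by single-sample Monte Carlo. All class and parameter names (PerDimGMMPrior, n_components, hidden) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerDimGMMPrior(nn.Module):
    """Independent learnable 1-D Gaussian mixture prior for each latent dimension.

    Illustrative sketch only; parameterization is an assumption, not the paper's.
    """
    def __init__(self, latent_dim, n_components=5):
        super().__init__()
        # One set of mixture weights, means, and scales per latent dimension.
        self.logits = nn.Parameter(torch.zeros(latent_dim, n_components))
        self.means = nn.Parameter(torch.randn(latent_dim, n_components))
        self.log_scales = nn.Parameter(torch.zeros(latent_dim, n_components))

    def log_prob(self, z):
        # z: (batch, D). Returns log p(z) = sum_d log sum_k pi_{dk} N(z_d; mu_{dk}, sigma_{dk}).
        z = z.unsqueeze(-1)                                   # (batch, D, 1)
        log_pi = F.log_softmax(self.logits, dim=-1)           # (D, K)
        comp = torch.distributions.Normal(self.means, self.log_scales.exp())
        log_p = torch.logsumexp(log_pi + comp.log_prob(z), dim=-1)  # (batch, D)
        return log_p.sum(-1)                                  # (batch,)

class PDGMMVAE(nn.Module):
    def __init__(self, x_dim, latent_dim, hidden=128, n_components=5):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))
        self.prior = PerDimGMMPrior(latent_dim, n_components)

    def elbo(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        std = (0.5 * log_var).exp()
        z = mu + std * torch.randn_like(mu)                   # reparameterized sample
        # Squared-error reconstruction term (a Gaussian-style decoder up to constants).
        recon = -F.mse_loss(self.dec(z), x, reduction="none").sum(-1)
        # Single-sample Monte Carlo estimate of KL(q(z|x) || p(z)); the adaptive
        # per-dimension GMM prior admits no closed-form KL against a Gaussian.
        log_q = torch.distributions.Normal(mu, std).log_prob(z).sum(-1)
        kl = log_q - self.prior.log_prob(z)
        return (recon - kl).mean()
```

In this sketch, maximizing `model.elbo(x)` (e.g., minimizing its negative with Adam) updates the encoder, decoder, and per-dimension mixture parameters in one objective, which mirrors the abstract's statement that the source-specific prior parameters are learned jointly rather than fixed in advance.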