Graph-Informed Adversarial Modeling: Infimal Subadditivity of Interpolative Divergences

arXiv:2603.20025v2 Announce Type: replace-cross

Abstract: We study adversarial learning when the target distribution factorizes according to a known Bayesian network. For interpolative divergences, including $(f,\Gamma)$-divergences, we prove a new infimal subadditivity principle showing that, under suitable conditions, a global variational discrepancy is controlled by an average of family-level discrepancies aligned with the graph. In an additive regime, the surrogate is exact. This closes a theoretical gap in the literature: existing subadditivity results justify graph-informed adversarial learning for classical discrepancies, but not for interpolative divergences, where the usual factorization argument breaks down. In turn, we justify replacing a standard graph-agnostic GAN, which uses a monolithic discriminator, with a graph-informed GAN (GiGAN) that uses localized family-level discriminators, without requiring the optimizer itself to factorize according to the graph. We also obtain parallel results for integral probability metrics and proximal optimal transport divergences, identify natural discriminator classes for which the theory applies, and present experiments showing improved stability and structural recovery relative to graph-agnostic baselines.
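The graph-informed surrogate described above replaces one global discrepancy with an average of discrepancies computed on the marginals of each graph family (a node together with its parents). The following is a minimal sketch of that aggregation, not the paper's method: all function names are hypothetical, and the per-family discrepancy is a simple mean-difference IPM (the supremum over unit-norm linear discriminators), standing in for the learned family-level discriminators the abstract describes.

```python
import numpy as np

def families(parents):
    """Family of node i in a DAG = {i} ∪ parents(i); `parents[i]` lists i's parents."""
    return [sorted({i, *parents[i]}) for i in range(len(parents))]

def family_ipm(x, y, fam):
    # Mean-difference IPM on the family's marginal:
    # sup_{||w||<=1} |E_x[w·z] - E_y[w·z]| = ||mean(x_fam) - mean(y_fam)||,
    # a closed-form stand-in for a trained family-level discriminator.
    return np.linalg.norm(x[:, fam].mean(axis=0) - y[:, fam].mean(axis=0))

def graph_informed_discrepancy(x, y, parents):
    """Average of family-level discrepancies aligned with the graph."""
    return float(np.mean([family_ipm(x, y, f) for f in families(parents)]))

# Example on a chain graph 0 -> 1 -> 2 (families {0}, {0,1}, {1,2}):
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 3))
parents = [[], [0], [1]]
print(graph_informed_discrepancy(x, x, parents))        # identical samples: 0.0
print(graph_informed_discrepancy(x, x + 1.0, parents))  # shifted samples: > 0
```

The subadditivity principle says a bound of this averaged form controls the global discrepancy; in the sketch the "discriminator class" is just unit-ball linear functions, whereas the paper identifies richer natural classes for which the theory applies.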
