Revisiting GAN with Bayes-Optimal Discrimination

arXiv:2510.25609v3 Announce Type: replace

Abstract: We propose an alternative to the standard GAN training approach, in which the discriminator is a binary classifier trained by cross-entropy to distinguish real samples from generated ones. Instead, we directly target the discrimination Bayes error rate (BER). To this end, we use the recently proposed Bayes optimal learning threshold (BOLT) loss and train the generator to maximize a surrogate of the discrimination BER. This viewpoint gives a unified perspective on GAN training: different objectives can be interpreted as parameterized bounds on the discrimination BER that describe a trade-off between smoothness and tightness. We show that, under balanced class priors, maximizing the surrogate BER with an unconstrained discriminator minimizes the total variation between the data and generator distributions. By constraining the discriminator to be $1$-Lipschitz, the proposed maximization objective defines a discrepancy that is upper-bounded by the Wasserstein-1 distance, thereby linking it to Wasserstein GAN. Experiments on several image-generation datasets under matched architectures and optimization settings show that GAN training using the surrogate BER improves sample quality and coverage over standard baselines. This analysis suggests that the proposed Bayesian viewpoint can achieve a better trade-off between training stability and convergence of the generator to the data distribution.
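To make the balanced-prior BER objective concrete, here is a minimal sketch of a differentiable surrogate of the discrimination BER that a generator could maximize. This is an illustration under the assumption of balanced class priors, not the paper's BOLT loss; the function `surrogate_ber` and its exact form are assumptions for exposition only.

```python
import numpy as np

def surrogate_ber(d_real, d_fake):
    """Smooth surrogate of the balanced discrimination error rate.

    d_real, d_fake: discriminator outputs D(x) in [0, 1] interpreted as
    the probability that x is real, evaluated on real and generated
    samples respectively. Under balanced class priors the balanced error
    is 0.5 * (P(predict fake | real) + P(predict real | fake));
    replacing hard decisions with the probabilities themselves yields a
    differentiable surrogate the generator can maximize.
    """
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return 0.5 * ((1.0 - d_real).mean() + d_fake.mean())

# A perfect discriminator (D=1 on real, D=0 on fake) attains surrogate
# BER 0; an uninformative one (D=0.5 everywhere, i.e., the generator
# distribution matches the data) attains the maximum of 0.5.
```

Driving this quantity toward its maximum of 0.5 is what, in the abstract's analysis, corresponds to shrinking the total variation between the data and generator distributions.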
