Implicit Hypothesis Testing and Divergence Preservation in Neural Network Representations

arXiv:2601.20477v3 Announce Type: replace Abstract: We study the training dynamics of neural classifiers through the lens of binary hypothesis testing. We reformulate classification as a collection of binary tests between the class-conditional distributions induced by learned representations, and we show empirically that, along training trajectories, well-generalizing networks progressively approach Neyman-Pearson-optimal decision rules, as measured by the monotonic growth of the KL divergence retained by the learned representations. We provide sufficient conditions for exact optimality, discuss the implications for training regularization, and define an informational plane, the Evidence-Error plane, on which convergence can be assessed systematically across network architectures.
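As a rough illustration of the quantities the abstract refers to, the sketch below computes one point on an Evidence-Error plane for a model checkpoint: the "evidence" coordinate is the KL divergence between the two class-conditional distributions of the learned representation, and the "error" coordinate is the empirical misclassification rate. The paper does not specify its divergence estimator; this sketch assumes a Gaussian approximation of each class-conditional representation distribution, for which the KL divergence has a closed form. All function names here are illustrative, not the authors' API.

```python
# A minimal sketch (not the authors' code) of one Evidence-Error point.
# Assumptions: binary labels, representations z = f(x) of shape (n, d),
# and a Gaussian fit per class (the paper's actual estimator is unspecified).
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL( N(mu0, cov0) || N(mu1, cov1) )."""
    d = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    # slogdet avoids overflow/underflow of the determinant in higher dimensions.
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    return 0.5 * (
        np.trace(cov1_inv @ cov0) + diff @ cov1_inv @ diff - d
        + logdet1 - logdet0
    )

def evidence_error_point(z, y, y_pred):
    """One checkpoint's coordinates on the Evidence-Error plane.

    z: (n, d) learned representations; y, y_pred: (n,) binary labels/predictions.
    Evidence = KL between Gaussian fits of the class-conditional distributions
    of z; Error = empirical misclassification rate.
    """
    z0, z1 = z[y == 0], z[y == 1]
    mu0, mu1 = z0.mean(axis=0), z1.mean(axis=0)
    # Small ridge keeps the covariance estimates invertible for small samples.
    eps = 1e-6 * np.eye(z.shape[1])
    cov0 = np.cov(z0, rowvar=False) + eps
    cov1 = np.cov(z1, rowvar=False) + eps
    evidence = gaussian_kl(mu0, cov0, mu1, cov1)
    error = float(np.mean(y_pred != y))
    return evidence, error
```

Calling `evidence_error_point` on held-out data at successive checkpoints traces a trajectory in the plane; under the abstract's claim, a well-generalizing network should show monotonically increasing evidence as error falls.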
