cs.LG, math.ST, stat.TH

Super-fast Rates of Convergence for Neural Network Classifiers under the Hard Margin Condition

arXiv:2505.08262v2 Announce Type: replace
Abstract: We study the classical binary classification problem for hypothesis spaces of Deep Neural Networks (DNNs) under Tsybakov’s low-noise condition with exponent $q > 0$, as well as its limit case $q=\infty$, which we refer to as the \emph{hard margin condition}. We demonstrate that, for a wide range of commonly used activation functions (including but not limited to ReLU, LeakyReLU, ELU, CELU, SELU, Softplus, GELU, SiLU, Swish, Mish, and Softmax), DNN solutions to the empirical risk minimization (ERM) problem with square loss surrogate and $\ell_p$ penalty on the weights ($0 < p \le 1$) can achieve super-fast rates of convergence, i.e. rates of order $n^{-\alpha}$ with exponent $\alpha > 1$, under the hard margin condition, provided that the Bayes regression function $\eta$ satisfies a \emph{distribution-adapted smoothness} condition relative to the marginal data distribution $\rho_{X}$. Furthermore, when the activation function is chosen as $\tanh$ or sigmoid, we show that the same rates follow from the standard assumption that $\eta\in \mathcal{C}^s$. Finally, we establish minimax lower bounds, showing that these rates cannot be improved upon whenever $q\ge 2$. Our proof relies on a novel decomposition of the excess risk for general ERM-based classifiers, which may be of independent interest.
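For reference, one standard way to write the two noise conditions and the penalized ERM problem mentioned above, sketched here in conventional notation with $\eta(x)=\mathbb{P}(Y=1\mid X=x)$ (the paper's exact definitions, labeling convention for $Y$, admissible range of $p$, and precise form of the penalty may differ), is:
\[
\rho_X\bigl(\{x :\, |2\eta(x)-1| \le t\}\bigr) \;\le\; C\,t^{q} \quad \text{for all } t > 0
\qquad \text{(Tsybakov low-noise condition with exponent } q\text{),}
\]
\[
|2\eta(x)-1| \;\ge\; c_0 > 0 \quad \text{for } \rho_X\text{-almost every } x
\qquad \text{(hard margin condition, the limit case } q=\infty\text{),}
\]
\[
\hat f_n \;\in\; \operatorname*{arg\,min}_{f \in \mathcal{F}_{\mathrm{DNN}}}\;
\frac{1}{n}\sum_{i=1}^{n}\bigl(f(X_i)-Y_i\bigr)^2 \;+\; \lambda \sum_{j} |w_j(f)|^{p},
\]
where $\mathcal{F}_{\mathrm{DNN}}$ denotes the DNN hypothesis space, $w_j(f)$ the network weights, and $\lambda > 0$ a regularization parameter; the rates above refer to the excess misclassification risk of the classifier induced by $\hat f_n$.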