Robust Learning with Optimal Error

arXiv:2604.02555v1 Announce Type: cross

Abstract: We construct algorithms with optimal error for learning with adversarial noise. The overarching theme of this work is that the use of \textsl{randomized} hypotheses can substantially improve upon the best error rates achievable with deterministic hypotheses.

- For $\eta$-rate malicious noise, we show the optimal error is $\frac{1}{2} \cdot \eta/(1-\eta)$, improving upon the optimal error of deterministic hypotheses by a factor of $1/2$. This answers an open question of Cesa-Bianchi et al. (JACM 1999), who showed that randomness can improve the error by a factor of $6/7$.
- For $\eta$-rate nasty noise, we show the optimal error is $\frac{3}{2} \cdot \eta$ for distribution-independent learners and $\eta$ for fixed-distribution learners, both improving upon the optimal $2\eta$ error of deterministic hypotheses. This closes a gap first noted by Bshouty et al. (Theoretical Computer Science 2002) when they introduced nasty noise, and reiterated in the recent works of Klivans et al. (NeurIPS 2025) and Blanc et al. (SODA 2026).
- For $\eta$-rate agnostic noise and the closely related nasty classification noise model, we show the optimal error is $\eta$, improving upon the optimal $2\eta$ error of deterministic hypotheses.

All of our learners have sample complexity linear in the VC dimension of the concept class and polynomial in the inverse of the excess error. All except the fixed-distribution nasty-noise learner are time-efficient given access to an oracle for empirical risk minimization.
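For reference, the rates quoted above can be collected side by side. The following table is an illustrative summary assembled from the abstract, not a table taken from the paper itself:

\[
\begin{array}{lcc}
\text{Noise model} & \text{Deterministic optimal error} & \text{Randomized optimal error} \\
\hline
\text{Malicious ($\eta$-rate)} & \eta/(1-\eta) & \tfrac{1}{2} \cdot \eta/(1-\eta) \\
\text{Nasty, distribution-independent} & 2\eta & \tfrac{3}{2} \cdot \eta \\
\text{Nasty, fixed-distribution} & 2\eta & \eta \\
\text{Agnostic / nasty classification} & 2\eta & \eta
\end{array}
\]

At $\eta = 0.1$, for instance, the randomized malicious-noise rate is $\tfrac{1}{2} \cdot 0.1/0.9 \approx 0.056$ versus the deterministic $0.1/0.9 \approx 0.111$.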
