Adversarial Robustness of NTK Neural Networks

arXiv:2604.25965v1

Abstract: Deep learning models are widely deployed in safety-critical domains, yet they remain vulnerable to adversarial attacks. In this paper, we study the adversarial robustness of neural networks in the neural tangent kernel (NTK) regime in the context of nonparametric regression. We establish minimax optimal rates for adversarial regression over Sobolev spaces, and then show that NTK neural networks trained via gradient flow with early stopping achieve this optimal rate. In the overfitting regime, by contrast, we prove that the minimum-norm interpolant is vulnerable to adversarial perturbations.
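The contrast the abstract draws between early stopping and interpolation can be illustrated with a minimal kernel-regression sketch. This is a hypothetical toy example, not the paper's construction: gradient flow on kernel coefficients stands in for NTK training, an RBF kernel stands in for the NTK, and finite-difference sensitivity of the predictor is used as a crude proxy for adversarial vulnerability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy noisy 1-D regression data (hypothetical setup, not from the paper).
n = 30
X = np.sort(rng.uniform(-1, 1, n))
y = np.sin(2 * np.pi * X) + 0.3 * rng.normal(size=n)

def rbf(a, b, gamma=20.0):
    # RBF kernel as a stand-in for the NTK.
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

K = rbf(X, X)

def gradient_flow(K, y, steps, lr=1e-3):
    # Discretized gradient flow on coefficients alpha, minimizing ||K a - y||^2.
    a = np.zeros_like(y)
    for _ in range(steps):
        a -= lr * K @ (K @ a - y)
    return a

# Early-stopped predictor vs. (near) minimum-norm interpolant.
a_early = gradient_flow(K, y, steps=200)
a_interp = np.linalg.solve(K + 1e-10 * np.eye(n), y)  # tiny jitter for stability

def predict(a, x):
    return rbf(x, X) @ a

# Finite-difference sensitivity of each predictor at a test point: a rough
# proxy for how much a small input perturbation can move the output.
x0, eps = np.array([0.1]), 1e-3
sens_early = abs(predict(a_early, x0 + eps) - predict(a_early, x0))[0] / eps
sens_interp = abs(predict(a_interp, x0 + eps) - predict(a_interp, x0))[0] / eps
print(f"early-stopped sensitivity: {sens_early:.2f}")
print(f"interpolant sensitivity:   {sens_interp:.2f}")
```

Starting gradient flow from zero means the early-stopped coefficient vector has a smaller norm than the interpolant's, which fits noise with large, oscillating coefficients; the interpolant's predictions are correspondingly more sensitive to small input perturbations.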
