ENFORCE: Nonlinear Constrained Learning with Adaptive-depth Neural Projection
arXiv:2502.06774v4 Announce Type: replace
Abstract: Ensuring that neural networks adhere to domain-specific constraints is crucial for safety and trustworthiness, and can also improve inference accuracy. Despite the nonlinear nature of most real-world tasks, the majority of existing methods are limited to affine (equality) or convex (inequality) constraints. We introduce ENFORCE, a neural network architecture that uses an adaptive projection module (AdaNP) to enforce nonlinear equality and inequality constraints on the predictions up to a specified tolerance $\varepsilon$, and exactly in the affine-in-$y$ case. For affine constraint sets, we prove that the associated projection mapping is non-expansive (1-Lipschitz), ensuring stable gradient propagation. For nonlinear constraints, we provide a local convergence analysis under standard regularity conditions. We evaluate ENFORCE on multiple tasks, including function fitting, real-world engineering case studies, and learning optimization problems. For the latter, we introduce a class of scalable optimization problems as a benchmark for nonlinear constrained learning. Across the benchmarks, the predictions of our architecture satisfy nonlinear equality and inequality constraints up to a prescribed tolerance $\varepsilon$, while maintaining scalability with tractable computational complexity at training and inference time.
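The abstract notes that in the affine-in-$y$ case the projection is exact and non-expansive. The paper's AdaNP module is not reproduced here, but the orthogonal projection onto an affine equality set $\{y : Ay = b\}$ has a standard closed form, $y - A^\top (AA^\top)^{-1}(Ay - b)$, whose Jacobian is an orthogonal projection matrix and hence 1-Lipschitz. A minimal NumPy sketch (the function name `affine_project` is our own, not from the paper):

```python
import numpy as np

def affine_project(y, A, b):
    """Orthogonally project y onto the affine set {y : A y = b}.

    Closed form: y - A^T (A A^T)^{-1} (A y - b). The mapping is
    non-expansive (1-Lipschitz) because its Jacobian,
    I - A^T (A A^T)^{-1} A, is an orthogonal projection matrix.
    Assumes A has full row rank so that A A^T is invertible.
    """
    residual = A @ y - b
    correction = A.T @ np.linalg.solve(A @ A.T, residual)
    return y - correction

# Sanity checks: the projected point satisfies the constraint exactly,
# and the map does not increase distances between inputs.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))   # 2 affine equality constraints in R^5
b = rng.standard_normal(2)
y1, y2 = rng.standard_normal(5), rng.standard_normal(5)
p1, p2 = affine_project(y1, A, b), affine_project(y2, A, b)
assert np.allclose(A @ p1, b)
assert np.linalg.norm(p1 - p2) <= np.linalg.norm(y1 - y2) + 1e-12
```

Because the projection is differentiable in $y$ (for fixed $A$, $b$), it can be appended to a network's output layer and trained end-to-end; the non-expansiveness is what keeps backpropagated gradients from blowing up through the projection step.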