Position: Adopt Constraints Over Fixed Penalties in Deep Learning

arXiv:2505.20628v4

Abstract: Recent efforts to develop trustworthy AI systems have increased interest in learning problems with explicit requirements, or constraints. In deep learning, however, such problems are often handled through fixed weighted-sum penalization: the constraints are added to the task loss with fixed coefficients, and the resulting scalarized objective is minimized. This position paper argues that fixed penalization is often ill-suited for deep learning problems with non-negotiable requirements, for several reasons. First, in non-convex settings, the penalized and constrained problems are generally not equivalent, so solving the former need not solve the latter. Second, fixed penalization weakens hard requirements into soft penalties to be traded off against task performance. Third, choosing penalty coefficients to indirectly solve the constrained problem often involves costly trial and error, because changing them alters the penalized objective itself, and hence can mean solving the wrong problem altogether. We therefore argue that, when a deep learning problem specifies non-negotiable requirements, the constrained formulation itself should be the starting point, not the surrogate problem defined by fixed penalization. The appropriate solution strategy should then be chosen based on the problem's structure and scale.
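To make the contrast concrete, here is a minimal sketch on a toy 1-D problem (not from the paper; the problem, function names, and hyperparameters are illustrative assumptions). We minimize f(x) = (x - 2)^2 subject to x <= 1, whose constrained optimum is x = 1. A fixed quadratic penalty leaves a residual constraint violation for any finite coefficient, while a simple Lagrangian gradient descent-ascent scheme (one common constrained approach) drives the iterate to the feasible optimum by adapting the multiplier:

```python
def fixed_penalty(lam, steps=5000, lr=1e-2):
    """Minimize (x-2)^2 + lam * max(0, x-1)^2 with fixed coefficient lam.

    The stationary point is x = (2 + lam) / (1 + lam) > 1, so the
    constraint x <= 1 is violated for every finite lam.
    """
    x = 0.0
    for _ in range(steps):
        grad = 2.0 * (x - 2.0) + 2.0 * lam * max(0.0, x - 1.0)
        x -= lr * grad
    return x


def dual_ascent(steps=5000, lr=1e-2):
    """Gradient descent on x, projected gradient ascent on multiplier mu,
    for the Lagrangian L(x, mu) = (x-2)^2 + mu * (x - 1).

    The multiplier adapts until the constraint holds (x -> 1, mu -> 2);
    no penalty coefficient needs to be hand-tuned.
    """
    x, mu = 0.0, 0.0
    for _ in range(steps):
        x -= lr * (2.0 * (x - 2.0) + mu)          # descent step on x
        mu = max(0.0, mu + lr * (x - 1.0))        # ascent step on mu, kept >= 0
    return x, mu


if __name__ == "__main__":
    print("fixed penalty, lam=1 :", fixed_penalty(1.0))    # ~1.5, infeasible
    print("fixed penalty, lam=10:", fixed_penalty(10.0))   # ~1.09, still infeasible
    print("dual ascent          :", dual_ascent())         # x ~ 1.0, mu ~ 2.0
```

The point mirrors the abstract's argument: raising the fixed coefficient only shrinks the violation (while distorting the objective), whereas the multiplier method treats feasibility as the target rather than a trade-off.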
