Tightening convex relaxations of trained neural networks: a unified approach for convex and S-shaped activations
arXiv:2410.23362v2
Abstract: The non-convex nature of trained neural networks has created significant obstacles to their incorporation into optimization models. In this context, Anderson et al. (2020) provided a framework for obtaining the convex hull of the graph of a piecewise linear convex activation function composed with an affine function; this effectively convexifies activations such as the ReLU together with the affine transformation that precedes it. In this article, we contribute to this line of work by developing a recursive formula that yields a tight convexification of the composition of an activation with an affine function for a broad class of activation functions, namely, those that are convex or "S-shaped". Our approach can be used to efficiently compute separating hyperplanes, or to determine that none exists, in various settings, including non-polyhedral cases.
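As a point of reference for the separation results mentioned above, the Anderson et al. (2020) family of cuts for a single ReLU neuron y = max(0, w·x + b) over box bounds x ∈ [L, U], with a relaxed binary indicator z, admits linear-time separation. Below is a minimal sketch of such an oracle; the function name, interface, and tolerance are illustrative assumptions, not code from either paper.

```python
import numpy as np

def separate_relu_hull(w, b, L, U, x_hat, y_hat, z_hat, tol=1e-9):
    """Linear-time separation over the exponential cut family
        y <= sum_{i in I} w_i (x_i - L~_i (1 - z)) + z (b + sum_{i not in I} w_i U~_i),
    one inequality per subset I (Anderson et al., 2020), where
    L~_i = L_i and U~_i = U_i if w_i >= 0, and the roles swap otherwise.
    Returns the most violated cut as (I, violation), or None if the
    point (x_hat, y_hat, z_hat) satisfies the whole family."""
    w, L, U, x_hat = map(np.asarray, (w, L, U, x_hat))
    # Sign-adjusted bounds L~, U~.
    L_adj = np.where(w >= 0, L, U)
    U_adj = np.where(w >= 0, U, L)
    # Per-coordinate contribution to the right-hand side if i is in I vs. not.
    in_I = w * (x_hat - L_adj * (1.0 - z_hat))
    out_I = w * U_adj * z_hat
    # Greedy coordinatewise choice minimizes the RHS over all subsets I.
    I = in_I < out_I
    rhs = np.where(I, in_I, out_I).sum() + b * z_hat
    violation = y_hat - rhs
    if violation > tol:
        return np.flatnonzero(I), violation
    return None

# Example: one ReLU with two inputs on [-1, 1]^2; the relaxed point below
# violates the I = {} cut y <= z (b + w_1 U~_1 + w_2 U~_2) = 1.0 by 0.2.
print(separate_relu_hull(w=[1.0, -1.0], b=0.0, L=[-1, -1], U=[1, 1],
                         x_hat=[0.5, -0.5], y_hat=1.2, z_hat=0.5))
```

The coordinatewise minimization is what makes separation cheap despite the exponentially many inequalities; the abstract's contribution extends this kind of tight convexification beyond piecewise linear convex activations, including non-polyhedral cases.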