Structure-Preserving Reconstruction of Convex Lipschitz Functionals on Hilbert Spaces from Finite Samples

arXiv:2605.08559v1 Announce Type: cross

Abstract: Convex functionals are ubiquitous in applied analysis, appearing as value functions, risk measures, super-hedging prices, and loss functionals in machine learning. In many applications, however, the functional is only observed through finitely many exact pointwise evaluations. We ask whether a convex functional on a separable Hilbert space $H$ can be reconstructed, up to arbitrary uniform accuracy, by an explicit formula that preserves convexity and Lipschitz regularity and is finitely computable. We answer this affirmatively. For every compact convex $C\subseteq H$, every $L$-Lipschitz convex functional $\rho:C\to\mathbb{R}$, and every $\varepsilon>0$, we construct an explicit finite-sample reconstruction which is convex, $L$-Lipschitz, and uniformly $\varepsilon$-accurate on $C$. The construction uses only finitely many linear measurements $\langle b,\cdot\rangle_H$, with $b$ lying in a finite-dimensional subspace of $H$, and is exactly implementable by a $\operatorname{ReLU}$-MLP. Building on this, we introduce convex neural functionals (CNFs), a structured trainable architecture class containing our reconstruction, in which every admissible parameter configuration automatically yields a convex, Lipschitz functional, providing a principled foundation for learning convex functionals from finite data.
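The paper's exact construction is not given in the abstract, but the structural claim — a convex, $L$-Lipschitz surrogate built from finitely many linear measurements and exactly implementable by a ReLU-MLP — can be illustrated with a max-affine sketch. The code below is a hypothetical minimal example (the parameters `B`, `c`, and the function `rho_hat` are illustrative, not the authors' construction): any maximum of affine functions $\max_j(\langle b_j, x\rangle + c_j)$ with $\|b_j\| \le L$ is automatically convex and $L$-Lipschitz, and pairwise maxima are exactly one ReLU unit each via $\max(u,v) = v + \operatorname{ReLU}(u-v)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper): m affine pieces in R^d.
# Slopes are rescaled to norm <= L, so the Lipschitz bound holds by
# construction, for ANY admissible parameter choice.
d, m, L = 5, 8, 1.0
B = rng.standard_normal((m, d))
B *= np.minimum(1.0, L / np.linalg.norm(B, axis=1, keepdims=True))
c = rng.standard_normal(m)

def rho_hat(x):
    """Max-affine surrogate: convex and L-Lipschitz for any (B, c)
    with row norms of B bounded by L."""
    return np.max(B @ x + c)

def relu_max(u, v):
    """max(u, v) = v + ReLU(u - v): one ReLU unit per pairwise max,
    so a max-affine function is exactly a small ReLU-MLP."""
    return v + np.maximum(u - v, 0.0)

# Sanity checks on random points: midpoint convexity, the Lipschitz
# bound, and the ReLU identity for the maximum.
x, y = rng.standard_normal(d), rng.standard_normal(d)
assert rho_hat((x + y) / 2) <= (rho_hat(x) + rho_hat(y)) / 2 + 1e-12
assert abs(rho_hat(x) - rho_hat(y)) <= L * np.linalg.norm(x - y) + 1e-12
assert np.isclose(relu_max(rho_hat(x), rho_hat(y)),
                  max(rho_hat(x), rho_hat(y)))
```

The point of the sketch is the same structural guarantee the abstract attributes to CNFs: convexity and the Lipschitz constant are enforced by the parameterization itself, not by training.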
