From Average Sensitivity to Small-Loss Regret Bounds under Random-Order Model

arXiv:2602.09457v2

Abstract: We study online learning in the random-order model, where the multiset of loss functions is chosen adversarially but revealed in a uniformly random order. By extending the batch-to-online transformation of Dong and Yoshida (2023), we show that if an offline algorithm enjoys a $(1+\varepsilon)$-approximation guarantee, an average sensitivity bound controlled by a function $\varphi(\varepsilon)$, and stability with respect to $\varepsilon$, then we can obtain a small-loss regret bound typically of order $\tilde O(\varphi^{\star}(\mathrm{OPT}_T))$, where $\varphi^{\star}$ is the concave conjugate of $\varphi$, $\mathrm{OPT}_T$ is the offline optimum over $T$ rounds, and $\tilde O$ hides polylogarithmic factors in $T$. Our result refines their original $(1+\varepsilon)$-approximate regret guarantee and applies to a broad class of problems, including online $k$-means clustering and online low-rank approximation. We further apply our approach to online submodular function minimization using $(1\pm\varepsilon)$-cut sparsifiers of submodular hypergraphs, obtaining a small-loss regret bound of $\tilde O(n^3 + n^{3/4}\mathrm{OPT}_T^{3/4})$, where $n$ is the ground-set size; we also demonstrate its applicability to online $\ell_1$ regression. Our work sheds light on the power of sparsification and related algorithmic techniques in achieving small-loss regret bounds in the random-order model, without requiring structural assumptions on loss functions, such as linearity or smoothness.
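To make the conjugate-based bound concrete, here is a minimal worked sketch; it assumes the regret decomposes as $\varepsilon\,\mathrm{OPT}_T + \tilde O(\varphi(\varepsilon))$ and takes the illustrative sensitivity bound $\varphi(\varepsilon) = n^3/\varepsilon^3$, a form chosen only to match the stated submodular-minimization bound, not quoted from the paper. Optimizing the trade-off over $\varepsilon$,
$$\min_{\varepsilon \in (0,1]} \left\{ \varepsilon\,\mathrm{OPT}_T + \frac{n^3}{\varepsilon^3} \right\},$$
and balancing the two terms gives $\varepsilon^{\star} \asymp n^{3/4}/\mathrm{OPT}_T^{1/4}$ (clipped to $1$ when $\mathrm{OPT}_T \le n^3$), for which both terms are of order $n^{3/4}\mathrm{OPT}_T^{3/4}$; when $\mathrm{OPT}_T \le n^3$, taking $\varepsilon = 1$ yields the additive $n^3$ term. Together these recover the stated $\tilde O(n^3 + n^{3/4}\mathrm{OPT}_T^{3/4})$, and this kind of infimum over $\varepsilon$ is, roughly, what the concave-conjugate notation $\varphi^{\star}(\mathrm{OPT}_T)$ packages in general.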
