cs.AI, cs.LG, math.OC

SGD at the Edge of Stability: The Stochastic Sharpness Gap

arXiv:2604.21016v1 Announce Type: cross
Abstract: When training neural networks with full-batch gradient descent (GD) and step size $\eta$, the largest eigenvalue of the Hessian — the sharpness $S(\boldsymbol{\theta})$ — rises to $2/\eta$ and hovers…
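The $2/\eta$ threshold in the abstract comes from the classical stability condition for gradient descent on a quadratic: GD with step size $\eta$ is stable only while the sharpness $S(\boldsymbol{\theta})$, the largest Hessian eigenvalue, stays at or below $2/\eta$. A minimal sketch of how one might measure $S$ in practice, via power iteration on Hessian-vector products (all names here are hypothetical, and the toy quadratic loss is an assumption, not the paper's setup):

```python
import numpy as np

# Toy quadratic loss L(theta) = 0.5 * theta^T A theta, so the Hessian
# is A and the sharpness S(theta) is its largest eigenvalue.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T  # symmetric positive semi-definite Hessian

def grad(theta):
    return A @ theta

def hvp(theta, v, eps=1e-5):
    # Hessian-vector product via a central finite difference of the
    # gradient; for this linear gradient it equals A @ v exactly, but
    # the same recipe applies to a generic loss.
    return (grad(theta + eps * v) - grad(theta - eps * v)) / (2 * eps)

def sharpness(theta, iters=200):
    # Power iteration on Hessian-vector products converges to the
    # top Hessian eigenvalue, i.e. the sharpness S(theta).
    v = rng.standard_normal(theta.shape)
    for _ in range(iters):
        Hv = hvp(theta, v)
        v = Hv / np.linalg.norm(Hv)
    return float(v @ hvp(theta, v))

theta = rng.standard_normal(5)
S = sharpness(theta)
eta_max = 2.0 / S  # GD stability threshold from the abstract: S <= 2/eta
```

With this estimate in hand, running GD at a step size above `eta_max` would put training past the edge of stability, which is the regime the abstract describes.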