HodgeCover: Higher-Order Topological Coverage Drives Compression of Sparse Mixture-of-Experts
arXiv:2605.13997v1 Announce Type: new
Abstract: Sparse Mixture-of-Experts (MoE) layers route tokens through a handful of experts, and learning-free compression of these layers reduces inference cost without retraining. A subtle obstruction blocks ever…
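To make the routing mechanism the abstract refers to concrete, here is a minimal sketch of a top-k sparse MoE layer in PyTorch. The class name `SparseMoE`, the layer sizes, and the choice of k are illustrative assumptions, not details from the paper; the paper's compression method operates on layers of roughly this form.

```python
# Minimal top-k sparse MoE routing sketch (assumptions: names/sizes are
# illustrative, not taken from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model: int, n_experts: int, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)  # router producing expert logits
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, d_model); each token is sent to its top-k experts only.
        logits = self.gate(x)                       # (n_tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)  # pick k experts per token
        weights = F.softmax(weights, dim=-1)        # renormalize over the chosen k
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e                         # which (token, slot) pairs hit expert e
            token_ids, slot = mask.nonzero(as_tuple=True)
            if token_ids.numel():
                out[token_ids] += weights[token_ids, slot, None] * expert(x[token_ids])
        return out

tokens = torch.randn(8, 64)
print(SparseMoE(d_model=64, n_experts=8, k=2)(tokens).shape)  # torch.Size([8, 64])
```

Because each token activates only k of the experts, most expert parameters are idle on any given batch, which is what makes learning-free, post-hoc compression of the expert set attractive for cutting inference cost.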