Design and Behavior of Sparse Mixture-of-Experts Layers in CNN-based Semantic Segmentation
arXiv:2604.13761v1 Announce Type: cross
Abstract: Sparse mixture-of-experts (MoE) layers have been shown to substantially increase model capacity without a proportional increase in computational cost and are widely used in transformer architectures, w…
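To make the capacity-vs-compute claim concrete, below is a minimal sketch of a sparse MoE layer built from convolutional experts. This is an illustrative assumption, not the paper's actual architecture: the expert shapes, the image-level top-k gate, and all names (SparseMoEConv, num_experts, k) are hypothetical. Total parameters scale with the number of experts, while each input only pays for k expert convolutions.

```python
# Minimal sketch of a sparse mixture-of-experts conv layer (PyTorch).
# Image-level top-k routing is an assumption for brevity; per-pixel
# routing is also common in segmentation settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoEConv(nn.Module):
    def __init__(self, channels: int, num_experts: int = 4, k: int = 1):
        super().__init__()
        # Each expert is an ordinary 3x3 conv; capacity grows with num_experts.
        self.experts = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=3, padding=1)
             for _ in range(num_experts)]
        )
        # Gate: global average pool -> linear -> per-expert logits.
        self.gate = nn.Linear(channels, num_experts)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.gate(x.mean(dim=(2, 3)))           # (B, E)
        weights = F.softmax(logits, dim=-1)              # (B, E)
        topk_w, topk_idx = weights.topk(self.k, dim=-1)  # (B, k)
        out = torch.zeros_like(x)
        # Only the selected experts run on each image: sparse activation.
        for e, expert in enumerate(self.experts):
            mask = (topk_idx == e).any(dim=-1)           # images routed to e
            if mask.any():
                w = topk_w[mask][topk_idx[mask] == e].view(-1, 1, 1, 1)
                out[mask] = out[mask] + w * expert(x[mask])
        return out

# Usage: with k=1, each image activates a single expert regardless of
# how many experts (and thus parameters) the layer holds.
layer = SparseMoEConv(channels=64, num_experts=4, k=1)
y = layer(torch.randn(2, 64, 32, 32))
```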