MoE-GRPO: Optimizing Mixture-of-Experts via Reinforcement Learning in Vision-Language Models
arXiv:2603.24984v2 Announce Type: replace
Abstract: Mixture-of-Experts (MoE) has emerged as an effective approach to reducing the computational overhead of Transformer architectures by sparsely activating a subset of parameters for each token while pres…
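The sparse activation the abstract refers to is conventionally realized with top-k token routing: a small gating network scores all experts per token and only the k highest-scoring experts are executed. The sketch below is a generic illustration of that mechanism, not code from this paper; names such as `TopKMoE`, `num_experts`, and `k` are illustrative assumptions.

```python
# Generic top-k Mixture-of-Experts routing sketch (illustrative, not the paper's code).
# Each token is routed to k of num_experts experts; only those experts run for it.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=64, d_hidden=128, num_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, num_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                        # x: (tokens, d_model)
        logits = self.router(x)                  # (tokens, num_experts)
        weights, idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # renormalize over the chosen k experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            token_rows, slot = (idx == e).nonzero(as_tuple=True)
            if token_rows.numel() == 0:
                continue                         # expert receives no tokens this pass
            out[token_rows] += weights[token_rows, slot, None] * expert(x[token_rows])
        return out

tokens = torch.randn(16, 64)
print(TopKMoE()(tokens).shape)                   # torch.Size([16, 64])
```

With k much smaller than num_experts, each token touches only a fraction of the layer's parameters, which is the computational saving the abstract describes.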