SAMoRA: Semantic-Aware Mixture of LoRA Experts for Task-Adaptive Learning
arXiv:2604.19048v1 Announce Type: cross
Abstract: The combination of Mixture-of-Experts (MoE) and Low-Rank Adaptation (LoRA) has shown significant potential for enhancing the multi-task learning capabilities of Large Language Models. However, existing…
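To make the premise concrete: a mixture of LoRA experts typically attaches several low-rank adapter pairs to a frozen pretrained linear layer and mixes their outputs with a learned router. The sketch below is a minimal, hypothetical illustration of that general pattern, not the paper's SAMoRA method; the class name, expert count, rank, and token-level softmax routing are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRAMoELinear(nn.Module):
    """Frozen linear layer augmented with a mixture of LoRA experts.

    Illustrative sketch only: names and routing are assumptions,
    not the SAMoRA design described in the paper.
    """

    def __init__(self, in_features, out_features, num_experts=4, rank=8, alpha=16.0):
        super().__init__()
        # Pretrained projection, kept frozen during adaptation.
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        # Expert i contributes the rank-r update B_i @ A_i.
        self.lora_A = nn.Parameter(torch.randn(num_experts, rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(num_experts, out_features, rank))
        # Token-level router producing a gate per expert.
        self.router = nn.Linear(in_features, num_experts)
        self.scaling = alpha / rank

    def forward(self, x):
        # x: (batch, seq, in_features)
        gates = F.softmax(self.router(x), dim=-1)                # (b, s, E)
        h = torch.einsum("bsd,erd->bser", x, self.lora_A)        # (b, s, E, r)
        delta = torch.einsum("bser,eor->bseo", h, self.lora_B)   # (b, s, E, out)
        mixed = torch.einsum("bse,bseo->bso", gates, delta)      # gate-weighted sum
        return self.base(x) + self.scaling * mixed
```

A quick usage check: only the LoRA experts and the router carry gradients, so the trainable parameter count stays small relative to the frozen base weight.

```python
layer = LoRAMoELinear(768, 768, num_experts=4, rank=8)
x = torch.randn(2, 16, 768)
y = layer(x)  # (2, 16, 768)
```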