DR-LoRA: Dynamic Rank LoRA for Fine-Tuning Mixture-of-Experts Models
arXiv:2601.04823v5 Announce Type: replace-cross
Abstract: Mixture-of-Experts (MoE) has become a prominent paradigm for scaling Large Language Models (LLMs). Parameter-efficient fine-tuning methods, such as LoRA, are widely adopted to adapt pretrained …
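Since the abstract is truncated before the method is described, the following is only a minimal sketch of vanilla LoRA, the baseline the abstract names, applied to a single linear projection such as one inside an MoE expert. It is not DR-LoRA's dynamic-rank mechanism, which the excerpt does not specify; all class and parameter names here are illustrative assumptions.

```python
# Hedged sketch: standard LoRA on one linear layer (frozen W plus a
# trainable low-rank update scaled by alpha/r). Not the paper's DR-LoRA
# method -- the truncated abstract does not describe how ranks change.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W with a trainable update (alpha/r) * B @ A."""
    def __init__(self, in_features: int, out_features: int,
                 r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
        # Standard LoRA init: A small random, B zero, so the adapter
        # contributes nothing at the start of fine-tuning.
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (..., in) @ (in, r) @ (r, out) -> (..., out)
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

# Usage: wrap one projection of a (hypothetical) MoE expert's FFN.
layer = LoRALinear(in_features=1024, out_features=4096, r=8)
out = layer(torch.randn(2, 16, 1024))  # (batch, seq, hidden)
print(out.shape)  # torch.Size([2, 16, 4096])
```

One plausible reading of "dynamic rank" in this setting is that r would vary per expert rather than being a single fixed hyperparameter as above, but the excerpt does not confirm this.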