Rethinking Parameter Sharing for LLM Fine-Tuning with Multiple LoRAs
arXiv:2509.25414v2 Announce Type: replace-cross
Abstract: Large language models are often adapted using parameter-efficient techniques such as Low-Rank Adaptation (LoRA), formulated as $y = W_0x + BAx$, where $W_0$ denotes the pre-trained parameters and $x…
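
As a minimal sketch of the LoRA formulation quoted above, $y = W_0x + BAx$: the frozen weight $W_0$ is combined with a low-rank update $BA$. The shapes, rank, and initialization below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Illustrative dimensions (assumed, not from the paper).
d_in, d_out, r = 16, 8, 4

rng = np.random.default_rng(0)
W0 = rng.normal(size=(d_out, d_in))      # frozen pre-trained weight W_0
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection A (rank r)
B = np.zeros((d_out, r))                 # trainable up-projection B, zero-initialized

x = rng.normal(size=(d_in,))
y = W0 @ x + B @ (A @ x)                 # LoRA-adapted output: y = W0 x + B A x
```

With $B$ initialized to zero, the adapted layer starts out identical to the pre-trained one; only the low-rank factors $A$ and $B$ are updated during fine-tuning.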