cs.AI, cs.CL, cs.LG

Rethinking Adapter Placement: A Dominant Adaptation Module Perspective

arXiv:2605.06183v1 Announce Type: cross
Abstract: Low-rank adaptation (LoRA) is a widely used parameter-efficient fine-tuning method that inserts trainable low-rank adapters into frozen pre-trained models. Recent studies show that using fewer LoRA adapters…
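
The abstract's one-line description of LoRA (a frozen pre-trained weight plus a trainable low-rank update) can be made concrete with a minimal sketch. This is an illustration of the general LoRA technique, not the paper's method; the class name `LoRALinear`, the rank `r`, and the scaling factor `alpha` are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: y = Wx + (alpha/r) * B A x.

    A minimal sketch of the standard LoRA parameterization; names and
    hyperparameters here are illustrative, not taken from the paper.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights

        # Low-rank factors: A projects the input down to rank r,
        # B projects back up to the output dimension.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero, so the adapter is a no-op before training.
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap a selected layer of a frozen model, then train only the adapter.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = [p for p in layer.parameters() if p.requires_grad]  # only lora_A, lora_B
```

Only the two low-rank factors are trainable, which is what makes the method parameter-efficient; which layers receive such adapters is exactly the placement question the paper studies.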