Compress Then Adapt? No, Do It Together via Task-aware Union of Subspaces
arXiv:2605.02829v1 Announce Type: new
Abstract: Adapting large pretrained models to diverse tasks is now routine, yet the two dominant strategies of parameter-efficient fine-tuning (PEFT) and low-rank compression are typically composed in sequence. Th…
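To make the two strategies the abstract contrasts concrete, here is a minimal NumPy sketch of (1) a LoRA-style low-rank PEFT update and (2) truncated-SVD low-rank compression, composed in sequence on a toy weight matrix. This illustrates the generic pipeline the paper critiques, not the paper's joint, task-aware method; LoRA as the PEFT instance and the specific ranks are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pretrained weight matrix.
d_out, d_in = 64, 64
W = rng.standard_normal((d_out, d_in))

# --- Strategy 1: LoRA-style PEFT (assumed as the PEFT instance) ---
# Freeze W and learn a low-rank update B @ A with rank r << min(d_out, d_in).
r = 4
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))           # common LoRA init: the update starts at zero
W_adapted = W + B @ A              # effective weight after adaptation

# --- Strategy 2: low-rank compression via truncated SVD ---
# Keep only the top-k singular directions of the adapted weight.
k = 16
U, S, Vt = np.linalg.svd(W_adapted, full_matrices=False)
W_compressed = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

# Sequential composition (adapt, then compress) can discard directions the
# task update relied on; the relative truncation error quantifies the loss.
err = np.linalg.norm(W_adapted - W_compressed) / np.linalg.norm(W_adapted)
print(err)
```

The sketch shows why ordering matters: the SVD step is task-agnostic and may prune exactly the subspace the adapter wrote into, which is the sequential-composition weakness the abstract points at.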