Label-Free Cross-Task LoRA Merging with Null-Space Compression
arXiv:2603.26317v1 Announce Type: new
Abstract: Model merging combines independently fine-tuned checkpoints without joint multi-task training. In the foundation-model era, fine-tuning with Low-Rank Adaptation (LoRA) is prevalent, making LoRA merging…
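The abstract is truncated, so the paper's actual null-space method is not shown here. As context for what "merging independently fine-tuned LoRA checkpoints" means, the following is a minimal sketch of the naive baseline: averaging the low-rank updates delta_W = B @ A from two adapters of the same base layer. All dimensions, names, and the averaging rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 8, 6, 2  # hypothetical layer dims (d x k) and LoRA rank r

# Two independently fine-tuned LoRA adapters for the same base layer:
# each adapter parameterizes a low-rank update delta_W = B @ A.
adapters = []
for _ in range(2):
    A = rng.standard_normal((r, k))
    B = rng.standard_normal((d, r))
    adapters.append((B, A))

# Naive label-free merge baseline (assumed, not the paper's method):
# average the full-rank updates delta_W across tasks.
deltas = [B @ A for B, A in adapters]
merged_delta = sum(deltas) / len(deltas)

W_base = rng.standard_normal((d, k))
W_merged = W_base + merged_delta  # merged model weight for this layer
print(W_merged.shape)
```

Methods like the one announced here typically improve on this baseline by reducing interference between the per-task updates before combining them.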