A Little Rank Goes a Long Way: Random Scaffolds with LoRA Adapters Are All You Need
arXiv:2604.08749v2 Announce Type: replace
Abstract: How many of a neural network’s parameters actually encode task-specific information? We investigate this question with LottaLoRA, a training paradigm in which every backbone weight is drawn at random…
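The core setup the abstract describes, trainable low-rank LoRA adapters attached to a frozen, randomly drawn backbone, can be illustrated with a minimal sketch. This is not the paper's implementation: the `LoRALinear` class below and its `rank`/`alpha` hyperparameters are illustrative assumptions, following the standard LoRA parameterization y = xWᵀ + (α/r)·xAᵀBᵀ with W frozen at its random initialization and only A, B trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A randomly initialized, frozen linear layer ("scaffold") plus a
    trainable low-rank LoRA update. Hypothetical sketch, not the paper's code."""

    def __init__(self, d_in: int, d_out: int, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        # Random scaffold weight: drawn once, never updated by gradients.
        self.weight = nn.Parameter(torch.randn(d_out, d_in) / d_in**0.5,
                                   requires_grad=False)
        # Trainable low-rank factors; B starts at zero so the adapter
        # initially contributes nothing (standard LoRA initialization).
        self.lora_a = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(d_out, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        frozen = x @ self.weight.T                       # random-backbone path
        update = (x @ self.lora_a.T) @ self.lora_b.T     # low-rank adapter path
        return frozen + self.scale * update

# Only the adapter factors receive gradients; the scaffold stays random.
layer = LoRALinear(768, 768, rank=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # 6144 / 595968 for these sizes
```

At rank 4 the trainable fraction here is about 1% of the layer's parameters, which is the kind of gap between adapter size and backbone size that makes the abstract's question about where task-specific information lives concrete.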