Meta-Learning at Scale for Large Language Models via Low-Rank Amortized Bayesian Meta-Learning
arXiv:2508.14285v3 Announce Type: replace
Abstract: Fine-tuning large language models (LLMs) with low-rank adaptation (LoRA) is a cost-effective way to incorporate information from a specific dataset. However, when a problem requires incorporating inf…
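The LoRA setup the abstract refers to can be sketched as follows. This is a minimal illustrative wrapper, not the paper's method: a frozen base linear layer plus a trainable low-rank update scaled by `alpha / r`. The class name `LoRALinear` and all hyperparameter values here are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: frozen base weight plus a trainable low-rank update."""
    def __init__(self, in_features, out_features, r=4, alpha=8):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)          # base weights stay frozen
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(out_features, r))        # up-projection, zero-init
        self.scale = alpha / r

    def forward(self, x):
        # y = x W^T + scale * (x A^T) B^T ; only A and B receive gradients,
        # so the trainable parameter count is r * (in_features + out_features).
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(16, 16)
x = torch.randn(2, 16)
# Because B is zero-initialized, the adapter starts as a no-op:
assert torch.allclose(layer(x), layer.base(x))
```

The zero initialization of `B` is the standard LoRA choice: training starts exactly at the pretrained model, and only the low-rank factors are updated, which is what makes the approach cost-effective for incorporating a specific dataset.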