MLorc: Momentum Low-rank Compression for Memory Efficient Large Language Model Adaptation
arXiv:2506.01897v4 Announce Type: replace
Abstract: With the increasing size of large language models (LLMs), full-parameter fine-tuning imposes substantial memory demands. To alleviate this, we propose a novel memory-efficient training paradigm called Mo…
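The abstract is truncated before the method is described, but the title indicates the core idea: storing optimizer momentum in low-rank compressed form to cut fine-tuning memory. A minimal sketch of that general idea, using truncated SVD as the compression step (an assumption for illustration; the paper's actual MLorc algorithm and its update rule are not given here):

```python
import numpy as np

# Hypothetical sketch of low-rank momentum compression: instead of storing
# a full d x k momentum matrix, keep only its rank-r factors. This is NOT
# the paper's exact MLorc algorithm, whose details are cut off above.

def compress(momentum: np.ndarray, rank: int):
    """Truncated SVD: store U*S and V^T factors instead of the full matrix."""
    U, S, Vt = np.linalg.svd(momentum, full_matrices=False)
    return U[:, :rank] * S[:rank], Vt[:rank, :]  # shapes (d, r) and (r, k)

def decompress(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Reconstruct the (approximate) momentum matrix from its factors."""
    return A @ B

rng = np.random.default_rng(0)
# A momentum matrix that is exactly rank 4, so rank-4 compression is lossless.
M = rng.standard_normal((64, 4)) @ rng.standard_normal((4, 32))
A, B = compress(M, rank=4)
print(A.size + B.size, M.size)           # 384 2048 -> ~5x fewer floats stored
print(np.allclose(decompress(A, B), M))  # True (exact here; lossy in general)
```

For a real momentum matrix of rank higher than r the reconstruction is only approximate; the memory saving per matrix is r(d + k) floats versus dk.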