Optimizer-Model Consistency: Full Finetuning with the Same Optimizer as Pretraining Forgets Less
arXiv:2605.06654v1 Announce Type: cross
Abstract: Optimizers play an important role in both the pretraining and finetuning stages of training large language models (LLMs). In this paper, we present the observation that full finetuning with the same optim…