Efficient Training on Multiple Consumer GPUs with RoundPipe
arXiv:2604.27085v1 Announce Type: cross
Abstract: Fine-tuning Large Language Models (LLMs) on consumer-grade GPUs is highly cost-effective, yet constrained by limited GPU memory and slow PCIe interconnects. Pipeline parallelism combined with CPU offloading…
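The abstract invokes pipeline parallelism before it is cut off. As background, a minimal, framework-free sketch of a generic GPipe-style fill-drain forward schedule is shown below; this is an illustration of pipeline parallelism in general, not RoundPipe's actual scheduling algorithm, and the function name is hypothetical.

```python
def fill_drain_schedule(num_stages: int, num_microbatches: int):
    """Forward-pass timeline of a GPipe-style fill-drain pipeline.

    Returns a list of time steps; each step maps a stage index to the
    micro-batch it processes at that step (stages are idle otherwise).
    """
    steps = []
    total = num_stages + num_microbatches - 1  # fill + steady state + drain
    for t in range(total):
        active = {}
        for s in range(num_stages):
            mb = t - s  # micro-batch mb reaches stage s at time mb + s
            if 0 <= mb < num_microbatches:
                active[s] = mb
        steps.append(active)
    return steps

if __name__ == "__main__":
    # 3 stages, 4 micro-batches: the pipeline fills, runs full for a few
    # steps, then drains, for 3 + 4 - 1 = 6 steps in total.
    for t, step in enumerate(fill_drain_schedule(3, 4)):
        print(t, step)
```

The idle slots during fill and drain are the "pipeline bubble"; schedulers reduce it by raising the micro-batch count relative to the stage count.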