LocalLLaMA

Finetuned a 270M model on CPU only – full weights, no LoRA, no GPU

Finetuned Gemma 3 270M on CPU only – full weights, no LoRA, no GPU, no cloud compute. Just ms-swift and a few minutes of patience. The dataset is deliberately small and absurd to make verification trivial: if the model outputs exactly what wasn't in its pretraining, you know the finetune actually took.
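For anyone wanting to reproduce something similar, a minimal sketch of what such a run could look like. This is not the author's actual invocation: the model ID, dataset path, and hyperparameters below are assumptions, and the flags follow the ms-swift 3.x `swift sft` CLI.

```shell
# Hypothetical CPU-only full-weight finetune with ms-swift.
# Hiding all GPUs via CUDA_VISIBLE_DEVICES forces training on CPU.
CUDA_VISIBLE_DEVICES="" \
swift sft \
  --model google/gemma-3-270m \
  --train_type full \
  --dataset ./absurd_facts.jsonl \
  --num_train_epochs 3 \
  --learning_rate 1e-5 \
  --output_dir ./gemma-270m-cpu-finetune
```

`--train_type full` is what distinguishes this from the default LoRA path; at 270M parameters the full weight update still fits comfortably in ordinary system RAM, which is why no GPU is needed.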