Running a 26B LLM locally with no GPU

This is crazy. I've been running local LLMs CPU-only for a while now, with great results from 12B models on an i5-8500 with only 32GB of RAM and no GPU. But now I've got a version of Gemma4 26B running really fast on the same machine, and it isn't even breaking a sweat.
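The back-of-the-envelope math checks out if the model is quantized. The post doesn't say which quant is being used, but assuming something like a ~4.5 bits-per-weight 4-bit quant (common for CPU inference via GGUF files), a 26B model's weights fit comfortably in 32GB of RAM, while the same model at fp16 would not:

```python
# Rough memory-footprint sketch for CPU-only inference.
# Assumption (not stated in the post): the model is quantized to
# roughly 4.5 bits per weight, typical of popular 4-bit GGUF quants.
def model_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate RAM needed just to hold the weights."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

print(f"~4-bit quant: {model_size_gb(26, 4.5):.1f} GB")  # fits in 32GB
print(f"fp16:         {model_size_gb(26, 16):.1f} GB")   # does not fit
```

This ignores the KV cache and OS overhead, which add a few more GB depending on context length, but it shows why quantization is what makes a 26B model viable on a 32GB machine.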

It is simply amazing what can run without a GPU.

submitted by /u/JackStrawWitchita
