LocalLLaMA

Is it normal for Gemma 4 26B/31B to run this fast on an Intel laptop? (288V / CachyOS)

Hey everyone, I just got into local LLMs about a week ago. I tried Ollama and LMStudio on my Core Ultra 9 288V, but they kept failing or giving me "hard stops" on the MoE models, so I figured I’d just try building the environment myself…