LocalLLaMA

Share your llama-server init strings for Gemma 4 models.

Hi. I'm trying to get workable Gemma 4 inference out of llama.cpp, but nothing I've tried works. I'm on the latest llama.cpp build, and I've also tested two earlier versions with the same result. I thought it might just require me waiting until l…
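For reference, this is the general shape of init string I mean. It's only a sketch: the model filename is a placeholder for whatever GGUF quant you have locally, and the sampler values are assumptions (the temp/top-k/top-p combo commonly recommended for earlier Gemma releases), not confirmed-working settings for Gemma 4.

```shell
# Sketch of a llama-server launch for a Gemma GGUF quant.
# The .gguf filename below is hypothetical -- substitute your own quant.
# Sampler values are assumptions carried over from earlier Gemma guidance.
llama-server \
  -m ./gemma-4-it-Q4_K_M.gguf \
  --ctx-size 8192 \
  -ngl 99 \
  --temp 1.0 \
  --top-k 64 \
  --top-p 0.95 \
  --host 127.0.0.1 \
  --port 8080
```

If you've found flag combinations (chat template overrides, cache type, etc.) that actually produce coherent output, please share the full command line.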