LocalLLaMA

Qwen3.6-35B-A3B GGUF from Unsloth is quite a bit slower?

Hi there, first of all I just want to give a huge thanks to Unsloth for their tireless work producing high-quality GGUFs, and for their friendly interactions with us here. I'm running a CPU-only setup with the latest llama.cpp on Debian …