LocalLLaMA

What I got with a 5060 Ti 16GB + Qwen3.6-35B-A3B-UD-Q5_K_M

I started trying local models a couple of weeks ago. At first I used Ollama, but Reddit said it was better to switch to llama.cpp, so I moved to a llama.cpp prebuilt release. It was amazing and I was very happy with llama.cpp: speed almost doubled running Qwen3.5 9 Q8_K_…
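For anyone else making the same switch, a minimal sketch of serving a GGUF model with a llama.cpp prebuilt binary looks like this; the model filename and parameter values here are illustrative assumptions, not the exact command from my setup.

```shell
# -m: path to the quantized GGUF file
# -ngl 99: offload all layers to the GPU (fits in 16GB for small quants)
# -c 8192: context window size
# --port 8080: serve an OpenAI-compatible HTTP API
./llama-server -m ./models/qwen3-q5_k_m.gguf -ngl 99 -c 8192 --port 8080
```

Ollama wraps roughly this same machinery, but running llama.cpp directly lets you tune GPU offload and context size yourself, which is where the speedup came from for me.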