Best config for Qwen3.6 27b / llama.cpp / opencode
Please share your best config <3 llama.cpp: "A:/0_llama_server/llama-server.exe" -m "a:\0_LM_Studio\unsloth\Qwen3.6-27B-GGUF\Qwen3.6-27B-UD-Q5_K_XL.gguf" --port 8080 --alias qwen3.5:27b -ngl 999 --threads 22 --flash-attn on --hos…
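For reference, here's a sketch of what a full invocation along those lines could look like. The command above is cut off, so the trailing flags here (--host and --ctx-size, and their values) are assumptions on my part, not the original poster's settings:

```shell
# Sketch of a llama-server launch based on the flags visible above.
# --host and --ctx-size are assumed (the pasted command is truncated);
# adjust to your own machine and VRAM.
"A:/0_llama_server/llama-server.exe" \
  -m "a:\0_LM_Studio\unsloth\Qwen3.6-27B-GGUF\Qwen3.6-27B-UD-Q5_K_XL.gguf" \
  --port 8080 \
  --alias qwen3.5:27b \
  -ngl 999 \
  --threads 22 \
  --flash-attn on \
  --host 127.0.0.1 \
  --ctx-size 32768
```

-ngl 999 offloads as many layers as fit to the GPU; if you run out of VRAM at Q5_K_XL, lowering -ngl or --ctx-size is the usual first knob to turn.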