https://preview.redd.it/nxn2rr15vqvg1.png?width=1920&format=png&auto=webp&s=8ec85d90b1286a6e7813c91a0a83c748e94ca849
I gave it a task: build a tower defense game, and use screenshots from the installed MCP to confirm your build.
My God, it's actually doing it. It's now testing the upgrade feature.
It noticed the canvas wasn't rendering at one point and fixed it.
It caught its own bug in wave completion and is actually fixing it...
I am blown away...
I can't imagine what the Qwen Coder that's following will be able to do.
What a time we're in.
llama-server -m "{PATH_TO_MODEL}\Qwen3.6\Qwen3.6-35B-A3B-UD-Q6_K_XL.gguf" --mmproj "{PATH_TO_MODEL}\Qwen3.6\mmproj-F16.gguf" --chat-template-file "{PATH_TO_MODEL}\chat_template\chat_template.jinja" -a "Qwen3.5-27B" --cpu-moe -c 120384 --host 0.0.0.0 --port 8084 --reasoning-budget -1 --top-k 20 --top-p 0.95 --min-p 0 --repeat-penalty 1.0 --presence-penalty 1.5 -fa on --temp 0.7 --no-mmap --no-mmproj-offload --ctx-checkpoints 5
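For anyone wanting to point a client at this setup: llama-server serves an OpenAI-compatible API on the --host/--port given above, and the -a flag sets the alias clients must send as the "model" field, so with this config it's "Qwen3.5-27B" even though the file is the 3.6 35B. A minimal sketch of the request body (the prompt text is just an example, not my exact prompt):

```python
import json

# Build an OpenAI-style chat request for the server above.
# "model" must match the -a alias, not the .gguf filename.
payload = {
    "model": "Qwen3.5-27B",
    "messages": [
        {"role": "user", "content": "Build a tower defense game in HTML5 canvas."}
    ],
    "temperature": 0.7,  # mirrors --temp 0.7 in the server config
}
body = json.dumps(payload)
# POST this body to http://localhost:8084/v1/chat/completions
# with any HTTP client (curl, urllib, requests, etc.).
print(body)
```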
EDIT: I've been made aware that opencode still has my 27B model alias.
I'm lazy, I didn't even bother updating the model name. Here's my llama.cpp server config; I was so excited I tested it and came here right away.