LocalLLaMA

Qwen3.6 (35B-A3B) with OpenCode. Running locally with llama.cpp

Let's test how good a coding model Qwen3.6 really is using the OpenCode harness: https://www.youtube.com/live/3UJFADzV0OY

submitted by /u/curiousily_
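For anyone who wants to reproduce the setup from the title, a minimal sketch is to serve a GGUF build of the model with llama.cpp's OpenAI-compatible server and point OpenCode at that endpoint. The model filename, context size, GPU-offload value, and port below are assumptions for illustration, not details from the post:

```shell
# Serve the model locally with llama.cpp's HTTP server (llama-server).
# The GGUF filename and the flag values here are illustrative guesses.
llama-server \
  -m ./Qwen3.6-35B-A3B-Q4_K_M.gguf \
  -c 32768 \
  -ngl 99 \
  --host 127.0.0.1 --port 8080

# llama-server exposes an OpenAI-compatible API, so OpenCode can then be
# configured with a custom provider whose base URL is
# http://127.0.0.1:8080/v1 (see OpenCode's provider configuration docs).
```

The `-ngl 99` flag offloads as many layers as possible to the GPU; on a CPU-only box you can drop it, at the cost of generation speed.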