I keep hearing the argument that large models are better for high-level planning and task orchestration, since they have more general knowledge to draw on when making decisions. However, I've been testing Qwen 3.6 27b (Unsloth Q5_K_M) quite a lot since its release, and it's consistently outperforming larger models on attention to detail and foresight.

Side-by-side comparison attached of Qwen (running in Pi, a lightweight harness that tends to benefit small models) and Sonnet 4.6 (in Claude Code), given the same "plan review" task using identical prompts and `Claude.md` files. Qwen thoroughly explored the code I'd already written, catching significantly more potential issues. It better understood what I'd already built and how this feature would fit in. It also suggested an efficiency improvement, a combined `search_and_read()` tool to eliminate a round-trip, plus new categories to add to the plan.

Claude did highlight access control and made points about native vs. custom tool parsing, but completely missed the mark on understanding how the feature would fit into the existing system -- an odd shortcoming, since it has a dense memory file that it's been filling in for months now.

My theory is that Qwen was trained to be less blindly self-confident and to spend more time reviewing what already exists, since token budgets aren't as tight for a 27b model; large models like Claude don't seem to spend their tokens on that kind of exploration.

Wondering if this stacks up with your experience of the Qwen 3.6 series.
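For anyone curious what the `search_and_read()` suggestion could look like in practice: the name is the only thing from Qwen's output, so this is just my sketch of the idea, a single tool call that returns matches *with* surrounding context, instead of a search call followed by per-file read calls.

```python
import re
from pathlib import Path

def search_and_read(root: str, pattern: str, context: int = 2) -> list[dict]:
    """Hypothetical combined tool: regex-search files under `root` and
    return each match with `context` lines around it, saving the agent
    a second read round-trip per hit."""
    regex = re.compile(pattern)
    results = []
    for path in sorted(Path(root).rglob("*.py")):  # scope is an assumption
        lines = path.read_text(errors="ignore").splitlines()
        for i, line in enumerate(lines):
            if regex.search(line):
                start = max(0, i - context)
                end = min(len(lines), i + context + 1)
                results.append({
                    "file": str(path),
                    "line": i + 1,  # 1-indexed, like most editors
                    "snippet": "\n".join(lines[start:end]),
                })
    return results
```

The harness would expose this as one tool schema; whether that actually saves tokens depends on how chatty the snippets get, but it matches the "eliminate a round-trip" framing.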