I’m running qwen3.6-35b-a3b (8-bit quant, 64k context) through OpenCode on my MBP M5 Max 128GB and it’s as good as Claude

Of course this is just a "trust me bro" post, but I've been testing various local models (a couple of Gemma4s, qwen3 coder next, Nemotron), and when I noticed the new qwen3.6 show up on LM Studio I hooked it up.
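For anyone wanting to try the same setup: LM Studio exposes an OpenAI-compatible server (by default at `http://localhost:1234/v1`), and OpenCode can point at it via a custom provider entry. This is a rough sketch only — the exact config schema, port, and model id here are assumptions and depend on your OpenCode version and LM Studio settings, so check both tools' docs before copying it:

```json
{
  "provider": {
    "lmstudio": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://localhost:1234/v1"
      },
      "models": {
        "qwen3.6-35b-a3b": {
          "name": "Qwen3.6 35B A3B (local)"
        }
      }
    }
  }
}
```

The model id should match whatever identifier LM Studio shows for the loaded model on its local server page.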

VERY impressed. It's super fast to respond, it handles long research tasks with many tool calls (I had it investigate why R8 was breaking some serialization across an Android app), and its responses are on point. I think it will be my daily driver (prior was Kimi k2.5 via OpenCode Zen).

FeelsGoodman, no more sending my codebase to rando providers and "trusting" them.

submitted by /u/Medical_Lengthiness6
