LocalLLaMA

Devs using Qwen 27B seriously, what’s your take?

For developers seriously using Qwen 27B for coding, Codex-style: what's your honest take? So far it's been pretty solid for me. Not always amazing, but honestly neither is GPT-5.5 sometimes. Considering the model size, it's kind of wild how capable…