I have been nothing but impressed by the quality of Gemma 4 since release. In general conversation it adapts well to different personas. For maths and reasoning it's great, and it doesn't spend too long thinking unless you tell it to. But its coding ability honestly leaves me struggling to grasp that this is only 31b parameters.

A small test I've done recently is giving the model an image and asking for a 3D model of it. It's not a simple image (an F1 car), so I didn't expect miracles.

For instance, here is Claude Sonnet 4.6: there's some complex geometry in there and the presentation is cool, but there are some absurd anomalies.

Gemini 3.1 Pro was cruder but less broken:

ChatGPT was `not just bad, it was Ferrari 2012 bad`:

Moving on to local models, the previous (and for some, current) darling of local models, Qwen3.5 27b at Q8, took 6800 tokens to deliver this:

But in just 3600 tokens, Gemma 4 31b produced this: