Comparison Qwen 3.6 35B MoE vs Qwen 3.5 35B MoE on Research Paper to WebApp

Note: the first screenshot is Qwen3.5 35B MoE (left) and the second is Qwen3.6 (right)

Hi guys,

I just did a quick comparison of Qwen3.6 35B MoE against Qwen3.5 35B MoE, with reasoning off, using llama.cpp and the same quant (unsloth Q4_K_XL GGUF).

The first is the Qwen3.5 outcome and the second is Qwen3.6's.

I'll leave it to you all to judge. I have to do more experiments before concluding anything.

I used the same skill that I created with Qwen3.5 35B before:
statisticalplumber/research-webapp-skill

```bat
@echo off
title Llama Server

:: Set the model path
set MODEL_PATH=C:\Users\Xyane\.lmstudio\models\unsloth\Qwen3.6-35B-A3B-GGUF\Qwen3.6-35B-A3B-UD-Q4_K_XL.gguf

echo Starting Llama Server...
echo Model: %MODEL_PATH%

llama-server.exe -m "%MODEL_PATH%" ^
  --chat-template-kwargs "{\"enable_thinking\": false}" ^
  --jinja -fit on -c 90000 -b 4096 -ub 1024 ^
  --reasoning off ^
  --presence-penalty 1.5 --repeat-penalty 1.0 ^
  --temp 0.6 --top-p 0.95 --min-p 0.0 --top-k 20 ^
  --keep 1024 -np 1

if %ERRORLEVEL% NEQ 0 (
    echo.
    echo [ERROR] Llama server exited with error code %ERRORLEVEL%
    pause
)
```
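Once the server is up, llama-server exposes an OpenAI-compatible chat endpoint (by default on `http://localhost:8080`). Here's a minimal stdlib-only Python sketch for sanity-checking it; the host/port and the prompt are assumptions, and the sampling fields just mirror the launch flags above:

```python
import json
import urllib.request

# Sampling settings mirroring the launch script above
payload = {
    "model": "Qwen3.6-35B-A3B",  # name is informational for a single-model llama-server
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,
    "presence_penalty": 1.5,
}

def query(url="http://localhost:8080/v1/chat/completions"):
    """POST the chat request to a local llama-server instance and return the reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Call `query()` once the server is running; swap the URL if you changed the port.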
submitted by /u/dreamai87
