Hi guys,
Back again. I have tested the Qwen 3.6 UD Q2_K_XL Unsloth model on the same paper-to-web-app task. The model performed very well: it handled all tool calls properly and managed the large context using llama.cpp on a laptop with 16 GB of VRAM.
Here are the details: there were 58 tool calls in total, with a success rate of 98.3%. The model processed around 2.7 million tokens while building the app from the given paper.
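For reference, that success rate works out to a single failed call out of 58. A quick check (the one-failure count is my assumption, inferred from the percentage):

```python
# Success-rate check: 58 total tool calls, assuming exactly 1 failed call
total_calls = 58
failed_calls = 1  # assumption: 98.3% of 58 implies one failure
success_rate = (total_calls - failed_calls) / total_calls * 100
print(f"{success_rate:.1f}%")  # prints 98.3%
```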
You can test this model using the same skill I created earlier with the Qwen 35B model:
statisticalplumber/research-webapp-skill
@echo off
title Llama Server - Qwen 3.6

:: Set the model path
set MODEL_PATH=C:\Users\test\.lmstudio\models\unsloth\Qwen3.6-35B-A3B-GGUF\Qwen3.6-35B-A3B-UD-Q2_K_XL.gguf

echo Starting Llama Server...
echo Model: %MODEL_PATH%

llama-server.exe -m "%MODEL_PATH%" ^
  --chat-template-kwargs "{\"enable_thinking\": false}" --jinja ^
  -fit on -c 90000 -b 4096 -ub 1024 --reasoning off ^
  --presence-penalty 1.5 --repeat-penalty 1.0 ^
  --temp 0.6 --top-p 0.95 --min-p 0.0 --top-k 20 ^
  --context-shift --keep 1024 -np 1

if %ERRORLEVEL% NEQ 0 (
    echo.
    echo [ERROR] Llama server exited with error code %ERRORLEVEL%
    pause
)
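Once the server is up, you can sanity-check it through llama.cpp's OpenAI-compatible endpoint before pointing the skill at it. A minimal sketch, assuming the default host and port (127.0.0.1:8080; the prompt text is just an example) — written for a Unix-style shell, so on Windows join it onto one line:

```shell
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Reply with the single word: ready"}
        ],
        "max_tokens": 16
      }'
```

If the server is healthy you should get back a JSON chat-completion response within a few seconds.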