Hello LocalLlama! I'd been using Qwen3.5 35B since release and it was awesome. I was super excited to try Qwen 3.6 as an agent and to try Opencode for the first time, since I was hitting a couple of critical tool-call failures with 3.5 (using Cline in VS Code).

I spent a few hours with Qwen yesterday building a directory with the information to let it SSH directly into my switch and make changes (I know it's butt-clenching, but I have config backups, don't worry lol). It's been working flawlessly so far, and I can't wait to keep developing this Agent.md into my OpSec buddy.

PC startup config (recommended by the Qwen team for agentic coding):

```
./build/bin/llama-server \
  --model ./models/Qwen3.6-35B-A3B-UD-Q6_K_XL.gguf \
  --n-gpu-layers auto \
  --port 32200 \
  --ctx-size 131072 \
  --batch-size 4096 \
  --ubatch-size 2048 \
  --flash-attn on \
  --threads 22 \
  -ctk q8_0 -ctv q8_0 \
  --jinja \
  --temp 0.6 --top-p 0.95 --top-k 20 \
  --min-p 0.0 --presence-penalty 0.0
```

Anyone else in the network-engineering space using agents like this? Would love to hear more ways I can incorporate local models to assist me.
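
Edit: a few people asked what the guardrails look like. My Agent.md isn't polished enough to share yet, but the idea can be sketched in a few lines: before any command the model proposes actually gets sent over SSH, it passes through an allowlist check, so read-only `show` commands go through freely and config changes are limited to a short list of verbs. This is just a minimal sketch of the concept, not Opencode's API; the function name and the allowlist contents are my own made-up examples.

```python
import re

# Read-only commands ("show ...") are always safe to run on the switch.
READ_ONLY = re.compile(r"^show\s+\S+")

# Config changes are only allowed if they start with an explicitly
# permitted keyword; everything else (e.g. "erase", "reload") is rejected.
ALLOWED_CONFIG_VERBS = ("vlan", "interface", "description")

def is_command_allowed(cmd: str) -> bool:
    """Return True only if the agent's proposed command is on the allowlist."""
    cmd = cmd.strip().lower()
    if READ_ONLY.match(cmd):
        return True
    return any(cmd.startswith(verb) for verb in ALLOWED_CONFIG_VERBS)

print(is_command_allowed("show running-config"))   # True
print(is_command_allowed("vlan 10"))               # True
print(is_command_allowed("erase startup-config"))  # False
```

Anything that fails the check gets bounced back to the model with an error message instead of reaching the switch, which (together with the config backups) is what keeps this from being truly butt-clenching.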