LocalLLaMA

Gemma 4 vs Qwen3.5: benchmarking quantized local LLMs on Go coding

I'm continuing to play around with local LLMs on my Framework 13 laptop. Limited memory bandwidth and processing power mean I'm mostly exploring quantized MoE models below 40B params. Surprisingly for me, gpt-oss-20B did pretty well.
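For anyone curious how coding benchmarks like this are typically scored: the usual metric is pass@k, estimated from how many of n generated samples pass the tests. A minimal Go sketch of the standard unbiased estimator (the function name `passAtK` is mine, not from any benchmark harness):

```go
package main

import "fmt"

// passAtK estimates pass@k given n generated samples, of which c
// passed the unit tests. Uses the numerically stable product form:
// pass@k = 1 - prod_{i=n-c+1}^{n} (1 - k/i).
func passAtK(n, c, k int) float64 {
	if n-c < k {
		// Fewer failures than k: at least one passing sample
		// is guaranteed in any draw of k.
		return 1.0
	}
	p := 1.0
	for i := n - c + 1; i <= n; i++ {
		p *= 1 - float64(k)/float64(i)
	}
	return 1 - p
}

func main() {
	// e.g. 3 of 10 completions for a Go task passed `go test`
	fmt.Printf("%.3f\n", passAtK(10, 3, 1)) // prints 0.300
}
```

With k=1 this reduces to c/n, so a model that solves 3 of 10 attempts scores 0.3; larger k rewards models that get it right at least occasionally across retries.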