LocalLLaMA

Running Gemma4 26B A4B on a Rockchip NPU using a custom llama.cpp fork. Impressive results for just 4 W of power usage!

submitted by /u/Inv1si