Gemma4 26B A4B NVFP4 GGUF
Hey everyone! I’ve just uploaded a GGUF version of nvidia/Gemma-4-26B-A4B-NVFP4. It can’t currently be run with the main branch of llama.cpp, so I’ve also built a Docker image that can. It’s available at catlilface/llama.cpp:gemma4_26b_nvfp4…
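For anyone who wants to try it, here's a rough sketch of how you might launch the image with llama.cpp's server. The model filename, mount path, and flags below are assumptions, not taken from the upload — adjust them to match whatever you actually downloaded:

```shell
# Hypothetical invocation — image tag from the post above; the GGUF
# filename and flags are placeholders, swap in your own.
docker run --gpus all \
  -v "$PWD/models:/models" \
  -p 8080:8080 \
  catlilface/llama.cpp:gemma4_26b_nvfp4 \
  -m /models/gemma-4-26b-a4b-nvfp4.gguf \
  --host 0.0.0.0 --port 8080
```

Once the server is up, it should expose the usual llama.cpp HTTP endpoints on port 8080.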