LocalLLaMA

MiMo 2.5 (not Pro) under llama.cpp? – as primary coder model?

I tried running AesSedai/MiMo-2.5-GGUF:Q4-K-M under llama.cpp (main branch, compiled 36 hours ago).

Hardware: NVIDIA A6000 with 48 GB VRAM + 300 GB of system RAM.

I had no success:

error loading model: missing tensor blk.0.attn_q.weight …

Is MiMo already supported by llama.cpp?