LocalLLaMA

MiniMax-M2.7 Q3_K_L & Q8_0 — First GGUF quants, Apple Silicon (M3 Max 128GB)

Just quantized MiniMax-M2.7 (229B MoE) — first GGUF quants available on HuggingFace.

Files:

- Q3_K_L (~110 GB) — fits in 128GB unified memory
- Q8_0 (~243 GB) — for 256GB+ setups

https://huggingface.co/ox-ox/MiniMax-M2.7-GGUF

PPL benchmark running now (c=…
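For anyone wanting to reproduce quants like these, a minimal sketch of the standard llama.cpp workflow — assuming these quants were made with llama.cpp's stock tooling (the OP doesn't say); all paths and filenames below are placeholders, not the actual commands used:

```shell
# Sketch only: paths/filenames are hypothetical, not the OP's actual setup.

# 1. Convert the HF safetensors checkpoint to a full-precision GGUF
python convert_hf_to_gguf.py /path/to/MiniMax-M2.7 \
  --outfile minimax-m2.7-f16.gguf --outtype f16

# 2. Quantize to the two released formats
./llama-quantize minimax-m2.7-f16.gguf minimax-m2.7-Q3_K_L.gguf Q3_K_L
./llama-quantize minimax-m2.7-f16.gguf minimax-m2.7-Q8_0.gguf Q8_0

# 3. A PPL run like the one mentioned above (-c sets the context size)
./llama-perplexity -m minimax-m2.7-Q3_K_L.gguf \
  -f wikitext-2-raw/wiki.test.raw -c 512
```

These are the stock llama.cpp binaries; the conversion step needs the original safetensors on disk, so budget roughly 2x the full-precision model size in free storage before the quantized files are produced.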