Hey r/LocalLLaMA, we investigated the MiniMax-M2.7 GGUFs producing NaNs during perplexity evals. Our findings show the issue affects 22%-38% of the GGUFs on Hugging Face (not just ours):
- One popular community uploader had NaNs in 38% (10/26) of their quants, another deleted theirs (1/4 NaN'd), and 22% of ours (5/23) had NaNs - we have since fixed ours.
- On 99.9% KLD and other metrics, all quants look fine.
- We found overflow inside llama.cpp to be the culprit.
- We ran PPL and 99.9% KLD benchmarks as well - lower left is better in the plot below.
https://preview.redd.it/46i7z9e1m7vg1.png?width=1600&format=png&auto=webp&s=bbfe77263d210211c1fc0d7a6a973d7027ce18af

- Perplexity NaNs appear during chunk 32 - this was also found by the community and other quant uploaders. We also found chunk 311 to cause issues.
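For anyone curious what a chunk failure looks like mechanically: perplexity is averaged over fixed-size chunks of token log-probs, so a single non-finite logit poisons that chunk's average (and any running total). This is just a toy numpy sketch, not llama.cpp's actual implementation - the chunk size of 512 is purely illustrative:

```python
import numpy as np

def chunk_perplexity(logprobs, chunk_size=512):
    """Per-chunk perplexity; one NaN/inf log-prob poisons its whole chunk."""
    chunks = []
    for i in range(0, len(logprobs), chunk_size):
        nll = -np.mean(logprobs[i:i + chunk_size])  # mean negative log-likelihood
        chunks.append(np.exp(nll))
    return chunks

rng = np.random.default_rng(0)
logprobs = rng.uniform(-8.0, -0.1, size=4 * 512)  # healthy token log-probs
logprobs[2 * 512 + 7] = np.nan                    # one overflowed activation -> NaN logit

for idx, ppl in enumerate(chunk_perplexity(logprobs)):
    print(f"chunk {idx}:", "NaN!" if np.isnan(ppl) else f"{ppl:.2f}")
```

The point is that one bad tensor only has to misbehave once on the eval text for an entire chunk (and everything downstream of it) to report NaN.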
- We found that blk.61.ffn_down_exps was the culprit - Q5_K and Q4_K quantizations of this tensor produce NaNs starting at chunk 32 during PPL evals. Interestingly, IQ4_XS, IQ3_XXS and smaller I-quant types do not NaN.
- This was quite confusing, since lower-bit quants (e.g. Q2_K_XL) did NOT NaN, but medium-sized quants (e.g. Q4_K_XL) did!
- We’ve now updated the M2.7 quants at https://huggingface.co/unsloth/MiniMax-M2.7-GGUF to alleviate the issue, though we still do not know the exact root cause of the NaN perplexities - it could be a fluke, but most likely large multiplies are overflowing.
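On the overflow theory: float16 tops out at 65504, so a single large intermediate product is enough to hit inf, and a later inf - inf (or inf / inf in a softmax or norm) yields NaN. A minimal numpy demonstration of the failure mode, and of why accumulating in float32 avoids it:

```python
import numpy as np

with np.errstate(over="ignore", invalid="ignore"):
    a = np.float16(300.0)
    b = np.float16(300.0)

    prod = a * b        # 90000 exceeds the float16 max (65504) -> inf
    diff = prod - prod  # inf - inf -> nan, which then propagates everywhere

    # The same multiply done in float32 stays finite, which is why
    # fp32 accumulation paths sidestep the NaN entirely.
    prod32 = np.float32(a) * np.float32(b)

print(prod, diff, prod32)  # inf nan 90000.0
```

This also lines up with the confusing size pattern above: whether you overflow depends on the exact magnitudes the quantized weights reconstruct to, not monotonically on bit width.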
Which quants did we test?
- 10/26 NaNs (38%) found at https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2.7-GGUF. Chunk-32 failures (9): IQ3_XXS, IQ3_XS, IQ3_M, Q3_K_M, Q3_K_L, Q3_K_XL, Q4_K_S, Q4_1, Q5_K_S. Late failure (1): IQ1_S (crashed at chunk 311).
- 5/23 NaNs (22%) in ours - all fixed now at https://huggingface.co/unsloth/MiniMax-M2.7-GGUF: UD-Q4_K_S, UD-Q4_K_M, UD-Q4_K_XL, UD-Q5_K_S, MXFP4_MOE. All failed at chunk 32.
- 1/4 NaN: the Q4_K_M at https://huggingface.co/AesSedai/MiniMax-M2.7-GGUF was deleted due to NaNs. Chunk 32 as well.
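For context on the 99.9% KLD numbers used alongside perplexity: KL divergence compares the full-precision model's next-token distribution against the quantized model's, and the 99.9th percentile catches rare badly-handled tokens that the mean would hide. A toy numpy sketch of the metric - simulated logits and a made-up vocab size, not the real eval harness:

```python
import numpy as np

def softmax(x):
    # subtract the row max for numerical stability
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def token_kld(ref_logits, quant_logits):
    """Per-token KL(ref || quant) between next-token distributions."""
    p = softmax(ref_logits)
    q = softmax(quant_logits)
    return (p * (np.log(p) - np.log(q))).sum(axis=-1)

rng = np.random.default_rng(0)
ref = rng.normal(size=(10_000, 32))                   # toy: 10k positions, vocab of 32
quant = ref + rng.normal(scale=0.05, size=ref.shape)  # simulated quantization noise

kld = token_kld(ref, quant)
print(f"mean KLD:  {kld.mean():.6f}")
print(f"99.9% KLD: {np.quantile(kld, 0.999):.6f}")
```

A quant can pass this comfortably on most tokens and still NaN out of a perplexity run - which is exactly the pattern reported above.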
Also, CUDA 13.2 is still definitely an issue - it causes some low-bit quants on all models to produce gibberish. Some people have dismissed it as not being an issue, but from what we’ve seen, more than 50 people have now confirmed that downgrading to CUDA 13.1 or lower fixes it. You can also see the public comments in our Hugging Face discussions, Reddit posts etc. NVIDIA has acknowledged that they are investigating the issue - see Unsloth issue 4849 and llama.cpp issues 21255 and 21371.

If you have any questions please do ask, and thank you again for all the support as always. Appreciate it and hope you have a lovely week.

submitted by /u/danielhanchen