Literally no 3rd-party API inference provider is hosting the MiMo-2.5 series models from Xiaomi, and they seem to be really good.
They have high token efficiency and a very low hallucination rate compared to Kimi-k2.6, Deepseek-V4, or GLM-5.1, yet no provider, not even Chutes, is hosting them other than Xiaomi themselves.
I find it very strange.