feat: Add Mimo v2.5 model support by AesSedai · Pull Request #22493 · ggml-org/llama.cpp

https://huggingface.co/XiaomiMiMo/MiMo-V2.5

Model Summary

  • Architecture: Sparse MoE (Mixture of Experts), 310B total / 15B activated parameters
  • Context Length: Up to 1M tokens
  • Modalities: Text, Image, Video, Audio
  • Vision Encoder: 729M-param ViT (28 layers: 24 sliding-window attention (SWA) + 4 full attention)
  • Audio Encoder: 261M-param Audio Transformer (24 layers: 12 SWA + 12 full attention)
  • Multi-Token Prediction (MTP): 329M parameters, 3 layers
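To illustrate the "310B total / 15B activated" figure above: in a sparse MoE, a router picks only a few experts per token, so most weights sit idle on any given forward pass. Below is a minimal, generic sketch of top-k expert routing; the expert count and k are made-up examples, not MiMo-V2.5's actual configuration.

```python
import math

def top_k_experts(logits, k):
    """Pick the k experts with the highest router logits and return
    (expert_index, weight) pairs, softmax-renormalized over that subset."""
    chosen = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = [math.exp(logits[i]) for i in chosen]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(chosen, exps)]

# Hypothetical example: 8 experts, route each token to the top 2.
router_logits = [0.1, 2.0, -1.0, 0.5, 1.5, -0.3, 0.0, 0.2]
routes = top_k_experts(router_logits, k=2)
# Only the chosen experts' FFN weights run for this token, which is how a
# 310B-parameter model can activate only ~15B parameters per forward pass.
```

The routed experts' outputs are then combined using these weights; everything outside the chosen subset contributes nothing to that token's computation.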
submitted by /u/jacek2023
