https://huggingface.co/XiaomiMiMo/MiMo-V2.5

Model Summary
- Architecture: Sparse MoE (Mixture of Experts), 310B total / 15B activated parameters
- Context Length: Up to 1M tokens
- Modalities: Text, Image, Video, Audio
- Vision Encoder: 729M-param ViT (28 layers: 24 SWA + 4 Full)
- Audio Encoder: 261M-param Audio Transformer (24 layers: 12 SWA + 12 Full)
- Multi-Token Prediction (MTP): 329M parameters, 3 layers
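A quick sanity check on the headline MoE numbers above: with 15B of 310B parameters activated per token, only a small fraction of the model runs on each forward pass. A minimal sketch (the variable names are illustrative, not from the model card):

```python
# Back-of-envelope figures taken from the spec list above.
total_params = 310e9    # total MoE parameters
active_params = 15e9    # parameters activated per token
vision_params = 729e6   # ViT vision encoder
audio_params = 261e6    # audio transformer encoder
mtp_params = 329e6      # multi-token prediction head

activated_fraction = active_params / total_params
print(f"Activated fraction per token: {activated_fraction:.1%}")  # ~4.8%
```

So despite the 310B total size, per-token compute is closer to a dense ~15B model, which is the usual appeal of sparse MoE at this scale.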
submitted by /u/jacek2023