LocalLLaMA

Minimax 2.7: good news!

Updated 2 hours ago. Thanks to Yuanhe134 for the clarification. We're eagerly awaiting this update because we know how important this model is to the community. submitted by /u/LegacyRemaster

MiniMax-M2.7 … this weekend for sure

Sorry to all OSS developers. I underestimated the workload required for open-sourcing. We still have some infrastructure adaptation work in progress. M2.7 is expected to be released this weekend. Thank you for your understanding. submitt…

daVinci-LLM-3B

https://huggingface.co/SII-GAIR-NLP/davinci-llm-model

Overview: daVinci-LLM-3B is a 3B-parameter base language model presented in daVinci-LLM: Towards the Science of Pretraining. This project aims to make the pretraining process a transparent an…

Qwen3.5-397B is shockingly useful at Q2

Quick specs (this is a workstation that was morphed into something LocalLLaMA-friendly over time):
- 3950X
- 96GB DDR4 (dual channel, running at 3000 MHz)
- W6800 + RX 6800 (48GB of VRAM at ~512GB/s)
- most tests done with ~20k context; kv-cache at q8_0
- llama cp…
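The q8_0 kv-cache setting is what makes ~20k context viable on 48GB of VRAM. A rough back-of-the-envelope sketch of the savings, using llama.cpp's q8_0 block layout (32 values plus one f16 scale, ~8.5 bits per element) — note the layer count, KV-head count, and head dimension below are placeholder assumptions for illustration, not Qwen3.5-397B's actual config:

```python
# KV-cache sizing sketch. N_LAYERS, N_KV_HEADS, and HEAD_DIM are assumed
# illustrative values, not the real model config.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, n_ctx, bytes_per_elem):
    # K and V each store n_ctx vectors of (n_kv_heads * head_dim) per layer.
    return 2 * n_layers * n_kv_heads * head_dim * n_ctx * bytes_per_elem

N_LAYERS, N_KV_HEADS, HEAD_DIM, N_CTX = 64, 8, 128, 20_480  # assumed

f16 = kv_cache_bytes(N_LAYERS, N_KV_HEADS, HEAD_DIM, N_CTX, 2)       # f16: 2 bytes/elem
q8  = kv_cache_bytes(N_LAYERS, N_KV_HEADS, HEAD_DIM, N_CTX, 1.0625)  # q8_0: 34 bytes / 32 elems

print(f"f16 KV cache : {f16 / 2**30:.2f} GiB")
print(f"q8_0 KV cache: {q8 / 2**30:.2f} GiB (~{f16 / q8:.1f}x smaller)")
```

Whatever the real dimensions are, the ratio holds: q8_0 cuts KV-cache memory to just over half of f16, which is often the difference between a context fitting in VRAM or spilling to system RAM.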
