LocalLLaMA

Qwen3.6-27B-3bit-mlx · Hugging Face: 3- & 5-bit mixed quant for RAM-poor Mac users.

Just dropped a 3-bit mixed quant (5-bit for the embedding and prediction layers) for Mac users. There was only one other 3-bit version of this model (from Unsloth), but it was very heavy and painfully slow: https://huggingface.co/models?other=base_model:quantiz…
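The mixing scheme described above (3-bit for most weights, 5-bit for the embedding and prediction layers) can be sketched as a per-layer bit-width predicate, similar in spirit to the mixed-quant recipes in mlx-lm's conversion tooling. The layer names below are illustrative assumptions, not the model's actual module paths:

```python
def pick_bits(layer_path: str, default_bits: int = 3, high_bits: int = 5) -> int:
    """Return the quantization bit-width for a layer, keeping the
    embedding and output (prediction) layers at higher precision."""
    # Hypothetical layer names; real paths depend on the model's module tree.
    sensitive = ("embed_tokens", "lm_head")
    if any(key in layer_path for key in sensitive):
        return high_bits
    return default_bits

# Ordinary transformer blocks get 3-bit weights...
assert pick_bits("model.layers.0.self_attn.q_proj") == 3
# ...while embeddings and the prediction head stay at 5-bit.
assert pick_bits("model.embed_tokens") == 5
assert pick_bits("lm_head") == 5
```

Keeping the embedding and output layers at higher precision is a common trade-off: those layers are small relative to the transformer blocks but disproportionately sensitive to quantization error, so the extra bits cost little RAM while protecting output quality.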