LocalLLaMA

Choosing a Mac Mini for local LLMs — what would YOU actually buy?

Got three options on my radar and genuinely can't decide. Not looking for spec sheets — want to hear from people actually running this stuff daily:

- M4 (32GB) — newest but apparently the slowest of the three for inference?
- M2 Pro (32GB) — heard it a…