Author name: /u/FenderMoon

LocalLLaMA

Prompts you use to test/trip up your LLMs

I'm obsessed with finding prompts to test the quality of different local models, and I've pretty much landed on a few that I use across the board. "Tell me about the Apple A6" (it's a pass if it mentions that Apple made their own microarchitecture called…

LocalLLaMA

Gemma4 26B A4B runs easily on 16GB Macs

Typically, models in the 26B-class range are difficult to run on 16GB Macs because any GPU acceleration requires the accelerated layers to sit entirely within wired memory. It's possible with aggressive quants (2 bits, or maybe a very lightweight I…
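The memory claim reduces to back-of-envelope arithmetic. Below is a minimal Python sketch of that estimate, assuming a 26B total parameter count and an assumed ~75% wired-memory ceiling on a 16GB Mac (the exact macOS wired limit and the bit-width labels are illustrative assumptions, not figures from the post):

```python
# Rough estimate of quantized weight footprint at different bit-widths,
# and whether it fits an assumed wired-memory budget on a 16GB Mac.
# Ignores KV cache, context buffers, and runtime overhead.

PARAMS = 26e9            # 26B-class model (total parameters)
TOTAL_RAM_GB = 16
WIRED_FRACTION = 0.75    # assumed wired-memory ceiling, not a measured value

def weight_size_gb(params: float, bits_per_weight: float) -> float:
    """Approximate weight footprint in GB at a given bits-per-weight."""
    return params * bits_per_weight / 8 / 1e9

wired_budget = TOTAL_RAM_GB * WIRED_FRACTION
for bits in (2.0, 3.0, 4.5, 8.0):   # roughly 2-bit, 3-bit, 4-bit-class, 8-bit quants
    size = weight_size_gb(PARAMS, bits)
    verdict = "fits" if size < wired_budget else "too large"
    print(f"{bits:.1f} bpw -> ~{size:.1f} GB weights ({verdict} in ~{wired_budget:.0f} GB wired budget)")
```

At ~2 bits per weight the weights alone come to roughly 6.5 GB, which leaves headroom inside a ~12 GB wired budget; at 4-bit-class quants the weights alone already approach that budget before any KV cache is allocated, which is why only very aggressive quants are practical here.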
