LocalLLaMA

Best second GPU for RTX 4070 Super?

So I currently have an RTX 4070 Super, and it can easily run models like Gemma 3 12B and even gpt-oss 20b (although it takes up to a minute to generate a response). I want to get a second GPU so I can run larger models, around 20b-30b params. What GPU do…

I’m so sick of coding and agents

This is an unhelpful rant, but it's been getting to me. I don't code. I don't care about Python. I don't know and don't care how agents work or what they do. I don't build websites, and I couldn't care less about GitHub inte…

My thought on Qwen and Gemma

This spring has been really hot for local LLMs, since both giants, Qwen and Gemma, released major models. I'm really excited about those releases and happy with their capabilities. Both are real heroes for local LLMs, although I have a feeling they have different stre…