LocalLLaMA

5070 Ti (New) vs 3090 (Used) to pair with 4070 for local LLMs?

I'm upgrading my setup to run larger models and need a second GPU to pair with my current RTX 4070 (12GB).

My workloads:

- LLMs: up to 32B dense (Gemma 4 31B) and ~120B MoE (Qwen 122B10A). I mostly run Q4/IQ4/UD MXFP4 quants.
- Image diffusion model: F…
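For sizing these models against either pairing, a back-of-envelope VRAM estimate helps (a rough sketch: the ~4.5 bits/weight for Q4-class quants and the flat 2 GB overhead are assumptions, not measurements):

```python
def quant_vram_gb(params_b: float, bits_per_weight: float, overhead_gb: float = 2.0) -> float:
    """Rough VRAM estimate: quantized weights plus a flat allowance
    for KV cache, activations, and CUDA context (assumed figure)."""
    weight_gb = params_b * bits_per_weight / 8  # 1B params at 8 bits ~= 1 GB
    return weight_gb + overhead_gb

# 32B dense at ~4.5 bits/weight (typical for Q4-class quants)
print(quant_vram_gb(32, 4.5))   # ~20 GB: fits 4070 12GB + 3090 24GB, tight on + 5070 Ti 16GB
# ~120B MoE at the same quant
print(quant_vram_gb(120, 4.5))  # ~69.5 GB: needs CPU/RAM offload with either pairing
```

By this estimate the 120B MoE spills to system RAM either way, so the pairing choice matters most for the 32B dense case, where 4070 + 3090 (36 GB total) leaves more headroom than 4070 + 5070 Ti (28 GB total).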