GPU strategy for local LLM + mixed workloads (70-person company) — NVIDIA vs AMD?
Hey all, we’re a mid-sized company (~70 people) currently planning to move a lot of our workloads on-prem instead of relying on cloud APIs. For now, the goal is to run small-to-mid-sized models in the ~30B range, like Qwen3.6 or Gemma4. Us…