LocalLLaMA

GPU recommendations for a coding/chat LLM

Forgive my insolence — I'm a server engineer, not an AI specialist, so the following has probably been answered a million times already. I know how to set up the infrastructure, but not the differences between models or the agents that run against them…