LocalLLaMA

AMD Radeon AI PRO R9700 32GB vs. 2× RTX 5060 Ti 16GB for a local setup?

How does the dual-GPU setup perform? Is it difficult to set everything up with, for example, llama.cpp? I'm asking because the dual setup would be considerably cheaper. I'm very satisfied with a few of the newer models, and it would be nice to run Qwen 3.6 27B on high…
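For context, my understanding is that the dual-5060 Ti case would be launched with something like the following (a rough sketch using llama.cpp's `llama-server`; the model path and the even 16,16 split are placeholder assumptions on my part):

```shell
# Split a GGUF model across two GPUs with llama.cpp.
# --split-mode layer : distribute whole layers across GPUs (default multi-GPU mode)
# --tensor-split 16,16 : proportion of the model per GPU (here: equal halves)
# -ngl 99 : offload all layers to the GPUs
llama-server \
  -m ./model.gguf \
  -ngl 99 \
  --split-mode layer \
  --tensor-split 16,16
```

My impression is that llama.cpp handles the splitting automatically once the flags are set, so the "difficulty" is mostly in building with the right backend (CUDA for the NVIDIA pair, ROCm or Vulkan for the R9700), but I'd appreciate corrections.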