Since Copilot has changed its billing model and become super expensive, I'm starting to consider the possibility of running a local LLM myself. But I'm not sure what kind of device is suitable for this kind of usage:
A Mac with a large amount of unified memory, such as 128GB?
A Windows PC with an RTX 5070/5080/5090, but will the VRAM limit become a serious problem?
A mini supercomputer, such as the NVIDIA DGX Spark, but I've heard it's relatively slow compared to the others?
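For context on the VRAM question, here is a rough back-of-envelope estimate of how much memory a model's weights need at a given quantization level. This is a sketch under stated assumptions (a dense model, GGUF-style bits-per-weight figures, weights only); real usage adds overhead for the KV cache, activations, and the runtime itself.

```python
# Rough memory estimate for a local LLM's weights.
# Assumption: dense model; effective bits-per-weight values like
# ~4.5 (Q4_K_M) or ~8.5 (Q8_0) are typical GGUF ballpark figures.

def weights_gb(params_b: float, bits_per_weight: float) -> float:
    """Memory for model weights in GB (params_b = parameter count in billions)."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

# A 70B model at ~4.5 bits/weight: ~39.4 GB, more than a 32GB RTX 5090.
print(round(weights_gb(70, 4.5), 1))

# A 13B model at ~8.5 bits/weight: ~13.8 GB, fits on a 16GB card.
print(round(weights_gb(13, 8.5), 1))
```

This is roughly why the Mac/DGX Spark options come up at all: large unified memory fits big models that a single consumer GPU can't, at the cost of lower token throughput.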
Can you share your experience with how to pick a device for running local LLMs? Thanks for the advice!