Anyone else running local LLMs on older hardware?

I'm using an old Xeon workstation with a decent amount of RAM and it's surprisingly usable. What's the oldest/weirdest hardware you've successfully run a model on?

submitted by /u/lewd_peaches
