LocalLLaMA

Built myself a bit of a local LLM workhorse. What's a good model to try out with llama.cpp that will put my 56 GB of VRAM to good use? Any other fun suggestions?
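For context, here's roughly how I plan to run whatever you suggest (a minimal sketch using the llama-cpp-python bindings; the model path and quant level are placeholders, not a specific recommendation):

```python
# Minimal sketch with llama-cpp-python; swap in whichever GGUF you grab.
from llama_cpp import Llama

llm = Llama(
    model_path="models/your-model-Q5_K_M.gguf",  # hypothetical path/quant
    n_gpu_layers=-1,  # offload all layers to the GPUs
    n_ctx=8192,       # context window; raise it if VRAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```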
