Most local LLM use cases I see are chat, coding, and RAG. But with vision models getting better and faster on consumer hardware, I feel like there is a lot of untapped territory.
I got a local VLM to play a board game just by looking at screenshots of the screen, and it worked way better than I expected.
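For anyone curious what one turn of that loop can look like: a minimal sketch of packaging a screenshot plus an instruction into a request for a local vision model. The Ollama endpoint and the `llava` model name are my assumptions, not necessarily what OP used.

```python
import base64
import json

# Hypothetical sketch of one "look at the screen, pick a move" turn.
# Assumes a local Ollama server with a vision model (e.g. llava) --
# those names are assumptions, not from the post above.

def build_vision_request(png_bytes: bytes, prompt: str, model: str = "llava") -> dict:
    """Package a screenshot and an instruction into an Ollama /api/generate payload.

    Ollama's generate API accepts base64-encoded images in an "images" list.
    """
    return {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(png_bytes).decode("ascii")],
        "stream": False,
    }

# In a real loop you'd grab the screen (e.g. with mss), POST this payload
# to http://localhost:11434/api/generate, parse the suggested move out of
# the response, and apply it in the game.
payload = build_vision_request(b"<raw png bytes here>", "What is the best next move on this board?")
print(json.dumps(payload)[:80])
```

The nice part is that the game never needs an API: the model only ever sees pixels, so the same loop works for anything you can screenshot.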
What is the weirdest or most unexpected thing you have used a local model for?