I'm wondering about this because a local LLM running within the browser could be really brilliant for a lot of applications. Has anything been built around this yet? And are LLMs at the stage where a browser application can use the person's own device to generate responses?