Local models are a godsend when it comes to discussing personal matters

I’ve been keeping a personal journal for the past few years; the whole thing now runs to over 100k tokens. I noticed that some of the Gemma 4 models support 256k context, so I decided to test the 26B A4B model by sharing my entire journal in the initial prompt and asking for some insights.
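As a rough sanity check before pasting a whole journal into a prompt, you can estimate whether it fits the context window. This sketch uses the common ~4 characters-per-token heuristic, which is only an approximation; real tokenizer counts vary, so the headroom figure below is an assumption, not a rule.

```python
# Rough estimate of whether a text fits a model's context window.
# Uses the ~4 chars/token heuristic; real tokenizers vary, so we
# reserve generous headroom for the system prompt and the reply.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token."""
    return len(text) // 4

def fits_context(text: str, context_window: int = 256_000,
                 reserve: int = 8_000) -> bool:
    """True if the text plus a reserved reply budget fits the window."""
    return estimate_tokens(text) + reserve <= context_window

journal = "x" * 500_000          # ~500k characters, roughly 125k tokens
print(estimate_tokens(journal))  # 125000
print(fits_context(journal))     # True: 125k + 8k reserve < 256k
```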

Obviously, I didn’t just say "share your insights, make no mistakes." I’m well aware that LLMs have a tendency to glaze users. That's why I gave it some guiding questions like:

  • "What topics or concerns come up repeatedly?"
  • "What have I been avoiding thinking about?"
  • "How has my thinking about [insert topic] evolved?"
  • "What were my major preoccupations each year?"
  • "Where do my stated values conflict with my described actions?"
  • "What do I say I want but rarely pursue?"

And Gemma 4 shared some genuinely great insights: things I hadn’t noticed, or had noticed at the time but forgotten over the years.

While some people may not hesitate to share personal details from their lives with ChatGPT and whatnot, I personally wouldn’t even consider sharing my personal life with a model hosted on RunPod, let alone with proprietary models. That’s why local models like Gemma 4 are a godsend for me. It’s crazy that I can talk about this kind of stuff with my own computer—things I’d be hesitant to share even with my closest friends—and get good answers, too. We really are living in a sci-fi world now.

submitted by /u/iamtheworldwalker