Tell me about a time an AI lied to you! I'm researching a mathematical way to stop these hallucinations

If you’ve spent any time with ChatGPT, you’ve probably been lied to. We’ve all been there.

I have a particularly bitter memory. Last Christmas, the toy my kid wanted was sold out everywhere. I asked an AI to find a local shop that had it in stock. It confidently gave me a store name and an address just 30 minutes away. I rushed there, full of hope... only to find it wasn't a toy store at all. It was a restaurant.

That experience pushed me to study why AIs hallucinate. Recently, I found that right before a hallucination occurs, an abnormal pattern (a "geometric distortion") appears in the model's internal hidden states.
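The post doesn't spell out how such a distortion would be measured, so here is a minimal, hypothetical sketch of one way to look for geometric anomalies in a trajectory of hidden-state vectors: track the cosine similarity between consecutive states and flag steps where it deviates sharply from the norm. The function names, the z-score threshold, and the toy data are all my own illustration, not the author's actual method.

```python
import numpy as np

def cosine_drift(hidden_states: np.ndarray) -> np.ndarray:
    """Cosine similarity between consecutive hidden-state vectors.

    hidden_states: array of shape (T, D), one vector per generation step.
    Returns an array of length T-1.
    """
    a = hidden_states[:-1]
    b = hidden_states[1:]
    num = np.sum(a * b, axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return num / den

def flag_distortions(hidden_states: np.ndarray, z_thresh: float = 3.0) -> np.ndarray:
    """Flag steps whose drift deviates from the mean by more than z_thresh std devs."""
    drift = cosine_drift(hidden_states)
    z = (drift - drift.mean()) / (drift.std() + 1e-12)
    return np.where(np.abs(z) > z_thresh)[0] + 1  # step indices touching an anomaly

# Toy demo: a smooth random-walk trajectory with one injected break at step 50.
rng = np.random.default_rng(0)
states = np.cumsum(rng.normal(scale=0.01, size=(100, 64)), axis=0) + 1.0
states[50] = -states[50]  # simulate an abrupt geometric break
print(flag_distortions(states))  # flags the steps adjacent to the break
```

In a real setting the `(T, D)` matrix would come from a model's per-token hidden states (e.g. a decoder layer's activations during generation), and a simple z-score would likely be replaced by something more robust; this only illustrates the shape of the idea.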

To take this research further, I need your help. Could you share your stories of when an AI lied to you? No lie is too small! I want to use these real-world examples as validation data for my research.

You can see the details of my work on GitHub: https://github.com/yubainu/sibainu-engine

Let’s build a future where AI doesn't have to lie to us!

submitted by /u/Fast_Tradition6074
