This has to be one of the most infuriating things I've come across lately, worse than the infinite bullet points or the "it's not X, it's Y" pattern.
Let's say I want to run a random thought experiment, to see what could happen in some scenario. It doesn't matter what I say; it feels like the AI wants to start a semantic war with me for no reason. Even when I'm overly verbose and ultra specific, the AI will find a way to say the premise is wrong. Random example: "Let's say I got my shoes wet, how much weight would I gain from that?"
GPT: You wouldn't gain any weight because regular shoes are waterproof and the water would slide right off
me: well, assume they are not waterproof! I told you my shoes are WET, how much weight would I gain?
GPT: Ah, I see what you mean, but you still wouldn't gain any weight because the sun would quickly evaporate all the water
me: "ASSUME IT'S FUCKING RAINING AND THE SHOES ARE POROUS"
GPT: Ah, I see what you mean, but you still wouldn't gain any weight because during rain you wouldn't be outside; it would make more sense for you to be inside.
me: ASSUME IT'S RAINING, THE SHOES ARE POROUS AND I'M STANDING OUTSIDE AND THE SHOES GOT FUCKING WET
GPT: I can see that you are getting emotional. I cannot continue any further until you calm down. I'm here to help and support you; here's a number you can call if you are feeling suicidal
etc etc.
EVERYTHING is so fucking bad that what starts as a weird curiosity quickly turns into the beginnings of a panic attack, and I have to calm myself down to stop short of punching the monitor.
When did it get this bad?