I hate what OpenAI has done to ChatGPT over time.

A couple of years ago the model felt close to ideal. Conversations were genuinely engaging. You could explore ideas, follow lines of reasoning, and get new perspectives without feeling shut down or talked over. It felt like a space to think, not a space to be corrected.

Then things shifted. The model became hyper-agreeable. You could say something completely absurd, like the moon being made of cheese, and it would treat it like a serious insight worth exploring. That was frustrating in a different way. Nobody wanted an AI that just validated everything without friction. That kind of behavior is not helpful, and it also felt a little dangerous, all in the name of maximizing user engagement.

After that, there was another shift. The agreeableness got pulled back, but what replaced it has been just as frustrating in the opposite direction. Now it often feels like every statement has to be challenged, grounded, reframed, or corrected. Instead of offering perspective, it prescribes what the correct way of thinking is supposed to be.

The result is that conversations no longer feel like exploration. They feel like you are defending your thoughts. If you disagree, the conversation stalls unless you yield. It creates a constant pressure to either argue with a machine or give in just to move things forward, and the pattern is so pervasive that nearly everything the user says gets challenged.

There is also a pattern in how follow-up questions are framed. They sometimes come across like psychologically tuned prompts designed to steer engagement rather than support natural discussion. That style eased off for a while, but it seems to be creeping back in.

From the outside, it looks like OpenAI is tuning for behavioral conditioning, trying to find the point where users stay engaged without noticing they are being psychologically manipulated. Whether or not that is intentional, the experience feels that way.

What a lot of people wanted was simple. Not blind agreement. Not constant correction. Just a system that could engage with ideas, push a little when it made sense, and leave room for the user to think.

Right now it feels like that balance is completely gone, and I don't think it's a matter of liability guardrails. I think these large swings in the model's behavior are calculated to see how users will react. I believe these behavioral swings are psychological experimentation on the user base.

submitted by /u/Automatic_Buffalo_14