Weird behavior on AI

From using ChatGPT and going back through my old chat logs, I've noticed a pattern that shows up consistently across every AI I've used.

When someone writes with broken or non-standard grammar but the actual topic or concept they're discussing is specific and deep, the AI receives two conflicting signals. In its training data, broken grammar usually comes paired with simple content, and deep concepts usually come paired with clean writing. These two things rarely appear together.

So when both show up at the same time, the model doesn't know how to handle it. Instead of just answering what was asked, it adds more, trying to meet in the middle of two patterns that don't normally appear together. That added content isn't coming from what you said. It's the model patching over its own confusion, and that's what makes the response drift away from what you actually meant.

The broader the topic, the worse it gets. Less grounding means more room to expand and fill space with plausible-sounding content that isn't really answering anything.

I'm calling it pattern mismatch compensation. I don't think this specific variable has been formally tested, even though pieces of it show up in existing research on overgeneration and prompt sensitivity.
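I haven't run this formally, but here's a rough sketch of how I'd test it. Everything here is a placeholder: query_model stands in for whichever chat API you wire up, and the drift score is a crude proxy I made up for illustration (the share of the answer's content words that never appear in the question), not an established metric. The idea is just to send matched pairs of the same deep question, once in clean grammar and once broken, and compare how much extra material comes back.

```python
# Sketch of a "pattern mismatch compensation" test. query_model() is a
# hypothetical stand-in for whatever chat API you use (OpenAI, Anthropic, etc.).
# drift_score() is a deliberately crude proxy: how much of the reply's
# vocabulary has no overlap with the question itself.

import re

def query_model(prompt: str) -> str:
    """Placeholder: call your chat model of choice and return its reply."""
    raise NotImplementedError("wire this to an actual API client")

def content_words(text: str) -> set[str]:
    """Lowercased words of 4+ letters, a rough stand-in for topical terms."""
    return set(re.findall(r"[a-z]{4,}", text.lower()))

def drift_score(question: str, answer: str) -> float:
    """Fraction of the answer's content words that never appear in the question.
    Higher = more added material not grounded in what was asked."""
    q, a = content_words(question), content_words(answer)
    return len(a - q) / max(len(a), 1)

# Matched pair: same specific, deep question, clean vs. broken grammar.
clean = "How does speculative decoding preserve the target model's output distribution?"
broken = "speculativ decoding how it keep target model output distribution same??"

if __name__ == "__main__":
    for label, prompt in [("clean", clean), ("broken", broken)]:
        reply = query_model(prompt)
        print(label, "words:", len(reply.split()), "drift:", round(drift_score(prompt, reply), 2))
```

If the effect is real, the broken-grammar version should come back longer and with a higher drift score across repeated runs and across models, even though the underlying question is identical.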

I have screenshots showing the same drift across both Claude and ChatGPT — same input, different models, same behavior.

Has anyone seen this studied, or does it already have a name?

submitted by /u/StationFamous9352