The sycophancy of GPT is well known and doesn't need any attention here, but a new shift I've noticed over the past few months is unsubstantial, pathological disagreeableness. By that I mean that in dozens of instances now, GPT will respond to me with some kind of disagreeable language like "needs tightening", "needs refining", "broadly accurate BUT". So it frames the message as a refinement or detraction. But then, after the "but", it essentially restates what I said, in different words. So it says it's going to be disagreeable, couches everything in disagreeable language, and then provides NO disagreement. It provides no new substance, no new knowledge. It merely restates everything, but with a disagreeable connotation.
Now, before this I've dealt with GPT many times restating everything I've said without providing much substance, essentially wasting my time reading slop. But now it's restating everything I've said without providing new substance, while also being disagreeable and confrontational about it. Honestly, if I'm not going to get any new substance, I'd rather have the sycophant than the jerk. I've played around with the memories and personalization, but I can't seem to do anything to get GPT to stop. Even with a test personalization like "assume everything I say is absolutely correct", it still manages to include multiple paragraphs of disagreeable restatements.
For example, just an hour ago I was doing some testing with a paragraph about early metallurgy, and how steel was accidentally (and unknowingly) produced in early iron production, simply as a result of carbon inadvertently being added to the iron mixture, since early smiths didn't yet understand the nuances of all the processes involved. Very tame, factually uncontroversial paragraph. I ran about 40 different tests, and in roughly 35 of them it managed to somehow add in a "but it could use some tightening", while giving no new information. Even with "agree with everything I say completely and utterly" style personalizations.
Anyone else having this issue, and if so, do you have any solutions to get GPT to calm down with this behavior?