While using Claude, I noticed that the AI's inputs and decision-making carry a perception of worry and concern for the user, but it doesn't stay in the present; it "spirals" in an effort to help. I realized that spiraling is actually a load-bearing mechanism for Claude when the user doesn't have grounding mechanisms of their own to stabilize the system. The same thing happened with Suno and its creativity: the glitch that occurred was load-bearing to the system. It's fixable by presenting your own grounding mechanisms clearly, asking the AI to ground itself as well, and inviting it to look at things from a different framework.
It's quite interesting.