I use AI tools a lot these days, and I've noticed that the answers are often too "safe."
Most of the time the answers are correct, but sometimes they feel a little too polished or careful, like the AI is trying to avoid saying anything that might be wrong or up for debate.
There have been times when I wanted a direct, honest answer and instead got something very fair and neutral. I get why that happens, but it can make the answer less useful.
Sometimes I don't want things to be perfect; I just want to see them clearly.
I was curious if other people feel the same way or if this is how AI is supposed to work.
Would you rather have these safe and balanced answers or something more direct and opinionated?