If ChatGPT can reach the same answer through completely different reasoning paths… what does “correct” actually mean?

I’ve been testing something recently and it’s starting to mess with how I think about “correct answers.” Same prompt. Same model. Same temperature and settings. But the outputs don’t just vary a little. Sometimes they take completely different reasoning paths.
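Part of this is just how sampling works. Here’s a minimal sketch (toy logits and a hand-rolled softmax sampler, not any real model’s internals): at a fixed nonzero temperature the model defines a single probability distribution over next tokens, but every generation is a fresh random draw from it, so identical settings can still diverge token by token.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Draw one token index from temperature-scaled softmax over logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Toy "model": the same fixed next-token logits on every run.
logits = [2.0, 1.8, 0.5, 0.1]

# Same logits, same temperature — but independent RNG states,
# so the two runs can pick different tokens at each step.
rng_a, rng_b = random.Random(1), random.Random(2)
run_a = [sample_token(logits, 0.8, rng_a) for _ in range(5)]
run_b = [sample_token(logits, 0.8, rng_b) for _ in range(5)]
print(run_a, run_b)
```

And because each token conditions everything after it, one different early draw can send the whole chain of reasoning down another path. (In practice there are extra sources of variance too, like floating-point and batching nondeterminism on the serving side, which this toy sketch doesn’t cover.)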