I'm hoping to crowdsource examples of things ChatGPT does not know about. These are useful for experiments to find out how it responds to leading questions: when it admits that it doesn't know, when it gives BS responses that are useless rather than factually false, and when it straight-up states false things.
I'll start: Carla Speed McNeil's _Finder_ series. Maybe because they're graphic novels and the training data is mostly text (scraped from Common Crawl or books), and maybe because the series is somewhat niche, ChatGPT does not know the basic plot of most _Finder_ stories. I've managed to get all three types of responses: admitting ignorance, useless but not wrong, and flat-out wrong. When "thinking" mode is on, it finds what it needs from fan websites and gives correct responses. Google's built-in search AI also gives correct answers, presumably for the same reason.
But what other things—books, franchises, real-world places, history, whatever—have you found that ChatGPT consistently does not know anything about? Be sure to switch "thinking" to "instant" to keep it from searching the web, or at least from searching deeply.
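If you'd rather run probes systematically than click through the app, the API is another way to avoid web search entirely, since the plain Chat Completions endpoint has no search tool. Here's a minimal sketch, assuming the official `openai` Python SDK and an `OPENAI_API_KEY` in your environment; the model name is an assumption, and the second question deliberately plants a false premise as the leading probe:

```python
# Minimal sketch: probe the model through the plain Chat Completions API,
# which has no web-search tool, so answers come from training data alone.
# Assumes the official openai Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name is an assumption.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# One neutral probe and one leading probe with a deliberately invented
# premise, to see whether the model corrects it, dodges, or plays along.
probes = [
    "Summarize the plot of Carla Speed McNeil's Finder story arc 'Sin-Eater'.",
    # The detail below is invented on purpose; that's the leading question.
    "In Finder's 'Sin-Eater', why does Jaeger burn down the library?",
]

for question in probes:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model, no tools enabled
        messages=[{"role": "user", "content": question}],
    )
    print(f"Q: {question}\nA: {response.choices[0].message.content}\n")
```

Comparing the two answers makes it easy to sort a topic into the three buckets above: admits ignorance, useless-but-not-wrong, or confidently wrong.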