I don’t know much about the tech behind LLMs; hoping some of you can help. My understanding is that GPT draws on its training data to generate a response. It does a good job of sounding conversational and can construct something well written quickly, but it’s basically picking whatever word seems most correct or appropriate, one word at a time. I’ve seen a user here describe it as tapping the middle suggestion on your phone’s predictive-text keyboard 20 times in a row.
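If I’ve got that right, the loop is basically something like this (a toy sketch with made-up words and probabilities, not a real model — a real LLM computes these probabilities with a neural network over a huge vocabulary):

```python
# Hypothetical "model": maps the last word to a probability distribution
# over possible next words. All values here are invented for illustration.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "sky": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "sat": {"down": 0.7, "still": 0.3},
    "down": {"<end>": 1.0},
}

def generate(start: str, max_words: int = 10) -> list[str]:
    """Greedy decoding: always take the single most probable next word."""
    words = [start]
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if probs is None:
            break
        best = max(probs, key=probs.get)  # the "middle suggestion" every time
        if best == "<end>":
            break
        words.append(best)
    return words

print(generate("the"))  # ['the', 'cat', 'sat', 'down']
```

Note that nothing in that loop checks whether the sentence is *true* — it only checks which continuation is most probable, which is the part I’m asking about.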
If that’s the case, won’t it always hallucinate? It’s not like it knows it’s lying to you; it’s just piecing together a response that seems the most correct. Even if you tell it not to lie, or give it a setting to admit when it doesn’t know something, wouldn’t that be impossible, since it goes against how it’s designed to work?