FAITH: Factuality Alignment through Integrating Trustworthiness and Honestness
arXiv:2604.10189v1 Announce Type: new
Abstract: Large Language Models (LLMs) can generate factually inaccurate content even when they possess the corresponding knowledge, which critically undermines their reliability. Existing approaches attempt to mitigate th…