Improved Evidence Extraction and Metrics for Document Inconsistency Detection with LLMs

arXiv:2601.02627v2 Announce Type: replace-cross Abstract: Large language models (LLMs) are becoming useful in many domains thanks to the impressive abilities that arise from large training datasets and large model sizes. However, research on LLM-based approaches to document inconsistency detection remains relatively limited. We address this gap by investigating the evidence extraction capabilities of LLMs for document inconsistency detection. To this end, we introduce new comprehensive evidence-extraction metrics and a redact-and-retry framework with constrained filtering that substantially improves evidence extraction performance over other prompting methods. We support our approach with strong experimental results and release a new semi-synthetic dataset for evaluating evidence extraction.
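The abstract names a redact-and-retry loop with constrained filtering but gives no implementation details. A minimal sketch of one plausible reading follows: extracted evidence is accepted only if it appears verbatim in the document (the constrained filter), then redacted so subsequent rounds surface new spans. All names here, including the mock extractor standing in for an LLM call, are illustrative assumptions, not the paper's actual method.

```python
def mock_extractor(text):
    # Hypothetical stand-in for an LLM call: returns at most one
    # candidate evidence sentence from the (possibly redacted) text.
    for sent in text.split(". "):
        if "inconsistent" in sent:
            return sent.strip(". ")
    return None

def redact_and_retry(document, extractor, max_rounds=3):
    # Normalize the document into sentences for the verbatim check.
    sentences = [s.strip(". ") for s in document.split(". ") if s.strip()]
    remaining = document
    evidence = []
    for _ in range(max_rounds):
        candidate = extractor(remaining)
        if candidate is None:
            break
        # Constrained filtering: accept only sentences that occur
        # verbatim in the source document, rejecting paraphrases.
        if candidate not in sentences:
            break
        evidence.append(candidate)
        # Redact accepted evidence so the next round must find new spans.
        remaining = remaining.replace(candidate, "[REDACTED]")
    return evidence
```

On a toy document with two inconsistency-bearing sentences, the loop extracts the first, redacts it, and then recovers the second on the retry.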
