Conditional Factuality Controlled LLMs with Generalization Certificates via Conformal Sampling
arXiv:2603.27403v1 Announce Type: new
Abstract: Large language models (LLMs) need reliable test-time control of hallucinations. Existing conformal methods for LLMs typically provide only \emph{marginal} guarantees and rely on a single global threshold…
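To make the critique concrete, here is a minimal sketch of the baseline the abstract describes: standard split conformal calibration, which produces a single global threshold and only a marginal coverage guarantee. This is generic conformal prediction, not the paper's method; the function name, score variables, and acceptance rule are illustrative assumptions.

import numpy as np

def global_conformal_threshold(cal_scores: np.ndarray, alpha: float) -> float:
    """Return the single global threshold giving marginal 1 - alpha coverage.

    cal_scores: nonconformity scores computed on a held-out calibration set
    (e.g., one score per calibration prompt/response pair; higher = worse).
    """
    n = len(cal_scores)
    # Finite-sample-corrected quantile level used in split conformal prediction.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(cal_scores, level, method="higher")

# Usage: accept a sampled LLM response only if its score falls at or below
# the threshold; otherwise abstain.
rng = np.random.default_rng(0)
cal = rng.uniform(size=500)          # stand-in nonconformity scores
tau = global_conformal_threshold(cal, alpha=0.1)
test_score = 0.42
print("accept" if test_score <= tau else "abstain", f"(threshold={tau:.3f})")

Because the same threshold tau is applied to every prompt, the resulting coverage holds only on average over the prompt distribution (marginal), not conditionally for individual prompts or subgroups, which is the limitation the abstract points to.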