I run an AI-based fact-checking platform and I refuse to let the LLM produce the verdict. Here’s why.
After a year building a production fact-checking system, the single most counter-intuitive design decision I keep defending is this: the LLM in our pipeline never produces a numeric score, never produces a true/false verdict, never produces anything th…
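To make the constraint concrete, here is a deliberately simplified sketch of the shape this can take (the names, stance labels, and tie-breaking rules below are illustrative only, not our production schema): the LLM is confined to labeling retrieved evidence, and a small deterministic function that anyone can read and audit maps those labels to the verdict.

```python
from dataclasses import dataclass

# Illustrative only: the LLM's output schema has no verdict or score field.
# It can only attach a stance label to each retrieved passage.
@dataclass
class EvidenceItem:
    source_url: str   # where the passage came from
    quote: str        # the passage itself
    stance: str       # "supports" | "contradicts" | "neutral" (hypothetical labels)

def verdict_from_evidence(evidence: list[EvidenceItem]) -> str:
    """Deterministic rule layer: the verdict is computed, never generated."""
    supports = sum(e.stance == "supports" for e in evidence)
    contradicts = sum(e.stance == "contradicts" for e in evidence)
    if supports == 0 and contradicts == 0:
        return "unverifiable"
    if contradicts > supports:
        return "likely false"
    if supports > contradicts:
        return "likely true"
    return "disputed"

if __name__ == "__main__":
    evidence = [
        EvidenceItem("https://example.org/report", "…", "supports"),
        EvidenceItem("https://example.org/blog", "…", "contradicts"),
    ]
    print(verdict_from_evidence(evidence))  # -> "disputed"
```

In a layout like this, the model has no slot to put a verdict in; the verdict is computed downstream from the labels it was allowed to produce.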