Epistemic reflections on AI answering our questions: overwatch, erudite, logician, interlocutor

arXiv:2304.14352v2 Announce Type: replace-cross Abstract: Currently, there is a trend for the wider public to rely on LLMs for financial or legal consultation and for medical and mental health support (Chatterji et al., 2025), often accepting the advice provided without seeking logical verification or empirical validation. While one might be fortunate enough to encounter a model with a particularly solid 'ground truth' or with auxiliary logic-symbolic reasoning capabilities, it remains a somewhat uncertain endeavour. Output is simply taken at face value, without further question. Yet careless reliance on AI to answer our questions and to judge our output violates both Grice's Maxim of Quality and Lemoine's legal Maxim of Innocence. A low-sensitivity plagiarism scanner may produce a Type II error by failing to detect difference (the null hypothesis wrongly maintained). The fallacy of affirming the consequent occurs when this failure to detect difference is then interpreted as evidence of equivalence, or as a demonstration of AI authorship. If the test is specified so that 'AI-generated' is effectively treated as the default H0, then a finding of 'no difference from AI' is taken as support for that null. Such a mis-specified test results in students being treated as guilty (of AI use or plagiarism) unless they can generate sufficient detectable difference from AI output, which yields false accusations under a fair null hypothesis (that the student wrote the work). To avoid LLMs becoming a sorcerer's apprentice, knowledge is required about which inference systems are, or should become, integrated for an LLM to become a trustworthy sparring partner. We end on a wider perspective in which the formalisation of the observer effect shows that uncertainty, classification, and interpretation are already shaped by the human or artificial agent's belief system, affective state, and tolerance for ambiguity, rather than arising only at the stage of LLM output.
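The mis-specified test described in the abstract can be illustrated with a minimal simulation. The sketch below is not from the paper: the detector's score distributions, threshold, and sensitivity are illustrative assumptions. It shows how treating 'AI-generated' as the default H0 turns a low-sensitivity detector's failure to find difference into wholesale false accusations of genuinely student-written work.

```python
import random

random.seed(0)

# Hypothetical setup: a low-sensitivity "AI detector" scores essays on
# similarity to AI output. Distributions barely separate the two classes,
# so the detector often cannot distinguish student from AI writing.
def detector_score(is_ai: bool) -> float:
    """Noisy similarity-to-AI score in [0, 1]; low sensitivity means
    heavy overlap between the student and AI score distributions."""
    base = 0.6 if is_ai else 0.5
    return min(1.0, max(0.0, random.gauss(base, 0.15)))

THRESHOLD = 0.45  # above this, the detector reports "no difference from AI"

# Under a fair null H0 ("the student wrote the work"), a score above the
# threshold only means the test failed to detect difference (Type II error
# territory), not that the work is AI-generated.
student_essays = [detector_score(is_ai=False) for _ in range(10_000)]

# Mis-specified test: 'AI-generated' is treated as the default H0, so
# "no detectable difference from AI" is read as support for AI authorship.
false_accusations = sum(score > THRESHOLD for score in student_essays)

print(f"Students falsely flagged as AI: {false_accusations / 10_000:.1%}")
```

With these (assumed) overlapping distributions, well over half of the honest students are flagged, which is the affirming-the-consequent fallacy in numerical form: absence of detectable difference is taken as evidence of equivalence.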
