A User-Centric Analysis of Explainability in AI-Based Medical Image Diagnosis
arXiv:2605.02903v1 Announce Type: cross
Abstract: In recent years, AI systems in the medical domain have advanced significantly. However, despite often outperforming humans, they are rarely used in practice because it is frequently unclear how they reach their decisions: optimal explanation and visualization of the decision process are often lacking. We therefore conducted a comparative user-centric analysis of state-of-the-art textual, visual, and multimodal explainable artificial intelligence (XAI) methods for medical image diagnosis. Our survey of 33 physicians showed that 88% agree that it is important for the AI to explain its diagnosis -- 64% even strongly agree. A combination of bounding box and report was rated higher than the other tested XAI methods on the evaluated aspects of understandability, completeness, speed, and applicability. We also tested the potential negative impact of false AI-based medical image diagnoses and found that 50% of the participants trusted false AI diagnoses across all tested XAI methods.