Physicians Demand Explainability in AI Medical Diagnosis
A user-centric study published on arXiv reports that 88% of 33 surveyed physicians consider it important for AI to explain its diagnoses in medical imaging, with 64% agreeing strongly. The research compared textual, visual, and multimodal explainable AI (XAI) methods and found that pairing a bounding box with a textual report outperformed the alternatives in understandability, completeness, speed, and applicability. Alarmingly, 50% of participants trusted false AI diagnoses, highlighting risks for clinical adoption. The study underscores the gap between AI performance and physicians' trust in practice, emphasizing the need for clear explanation and visualization of model decision processes.
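To make the top-rated format concrete, here is a minimal illustrative sketch of what a bounding-box-plus-report explanation might look like when rendered for a clinician. Everything in it is assumed for illustration: the synthetic image, the box coordinates, the report text, and the use of matplotlib are not taken from the paper.

```python
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import numpy as np

# Hypothetical example: a synthetic "scan" with a model-proposed finding.
# Neither the image nor the coordinates come from the study.
scan = np.random.default_rng(0).random((256, 256))
finding_box = (90, 110, 70, 50)  # x, y, width, height (assumed values)
report_text = "Finding: opacity in right mid-lung zone (model-generated)."

fig, ax = plt.subplots(figsize=(5, 5))
ax.imshow(scan, cmap="gray")

# Visual explanation: localize the image region behind the prediction.
x, y, w, h = finding_box
ax.add_patch(patches.Rectangle((x, y), w, h, linewidth=2,
                               edgecolor="red", facecolor="none"))

# Textual explanation: pair the localization with a short report.
ax.set_title(report_text, fontsize=9)
ax.axis("off")
plt.tight_layout()
plt.show()
```

Showing both cues in a single view mirrors the multimodal format the surveyed physicians rated highest: the box answers "where", the report answers "what".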
Key facts
- Study published on arXiv (2605.02903) on AI explainability in medical imaging
- Survey of 33 physicians: 88% agree AI should explain its diagnoses, with 64% agreeing strongly
- Combination of bounding box and report rated best among XAI methods
- 50% of participants trusted false AI diagnoses
- Comparative analysis of textual, visual, and multimodal XAI methods
- AI systems rarely used in clinical practice despite outperforming humans
- Evaluated aspects: understandability, completeness, speed, applicability