People-Centred Medical Image Analysis Proposes Fairer AI Integration
A new arXiv preprint argues that data-centric medical AI has seen limited clinical adoption because it pays insufficient attention to two issues: fair performance across diverse patient populations and integration into clinical workflows. The authors identify performance biases as a source of regulatory barriers, and poorly integrated automation as disrupting clinical routines, degrading human-AI collaboration, and reducing clinicians' willingness to adopt AI tools. They note that prior work on workflow integration (e.g., Learning to Defer and Learning to Complement) and on AI fairness has examined these challenges in isolation, overlooking both their interdependence and practical constraints such as limited clinician availability. The paper proposes a people-centred approach to medical image analysis that jointly optimizes fairness and workflow integration.
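To make the performance-bias point concrete, one common way to quantify it is the worst-case accuracy gap across patient subgroups. The sketch below is illustrative only, not the preprint's method; the function name and toy data are hypothetical.

```python
import numpy as np

def accuracy_gap(y_true, y_pred, groups):
    """Worst-case accuracy disparity across subgroups (illustrative).

    A large gap is the kind of performance bias the preprint argues
    can create regulatory barriers. All names here are hypothetical.
    """
    accs = []
    for g in np.unique(groups):
        mask = groups == g
        # Per-subgroup accuracy: fraction of correct predictions.
        accs.append(np.mean(y_true[mask] == y_pred[mask]))
    return max(accs) - min(accs)

# Toy example: two subgroups, one served far better than the other.
y_true = np.array([1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0])
grp = np.array(["A", "A", "A", "B", "B", "B"])
print(accuracy_gap(y_true, y_pred, grp))  # subgroup A: 1.0, subgroup B: 1/3
```

A fairness-aware objective would penalize this gap alongside overall accuracy; the preprint argues such objectives should be optimized jointly with workflow integration rather than separately.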
Key facts
- arXiv:2604.26991v1 is a cross-listed announcement.
- Recent data-centric medical AI has produced accurate diagnostic systems but limited clinical adoption.
- Limited uptake is attributed to insufficient attention to fair performance across diverse populations and workflow integration.
- Performance biases can create regulatory barriers.
- Poorly integrated automation can disrupt clinical routines and degrade human-AI collaboration.
- Prior work on Learning to Defer (L2D) and Learning to Complement (L2C) has examined these challenges in isolation.
- Practical constraints include restricted clinician availability.
- The paper proposes a People-Centred Medical Image Analysis approach.
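The Learning-to-Defer idea referenced above can be sketched as an inference rule: the model predicts on most cases but routes its least confident ones to a clinician, capped by the restricted clinician availability the authors highlight. This is a minimal hypothetical sketch, not the paper's algorithm; all names and the budget mechanism are assumptions.

```python
import numpy as np

def l2d_predict(class_probs, defer_scores, clinician_budget):
    """Learning-to-Defer style inference (hypothetical sketch).

    class_probs: (n, k) model class probabilities per case.
    defer_scores: (n,) estimated benefit of deferring each case to a human.
    clinician_budget: max number of cases the clinician can review,
        modelling the practical constraint of limited availability.
    Returns per-case decisions: a class index, or -1 meaning "defer".
    """
    decisions = np.argmax(class_probs, axis=1)
    # Defer only the highest-scoring cases, capped by the clinician budget.
    budget = min(clinician_budget, len(defer_scores))
    defer_idx = np.argsort(defer_scores)[::-1][:budget]
    decisions[defer_idx] = -1
    return decisions

# Toy example: 4 cases, 2 classes, clinician can review at most 2 cases.
probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.2, 0.8], [0.51, 0.49]])
defer = np.array([0.1, 0.7, 0.05, 0.6])  # high score = model wants a human
print(l2d_predict(probs, defer, clinician_budget=2))
```

In a full L2D system the defer scores are learned jointly with the classifier; the preprint's contribution is to argue that such deferral policies should also account for fairness across patient subgroups, not just overall accuracy.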