Framework for Measurable Trust in Clinical AI
A new framework for trustworthy clinical AI is proposed, emphasizing evidence, supervision, and staged autonomy over black-box confidence. The approach combines a deterministic core, a patient-specific AI assistant, multi-tier model escalation, and human supervision. Trust depends on selective verification of critically important findings, bounded clinical context, and disciplined prompt architecture.
Key facts
- Trust in clinical AI must be engineered as a measurable system property.
- Framework based on evidence, supervision, and staged autonomy.
- Combines deterministic core, patient-specific AI assistant, multi-tier escalation, and human supervision.
- Trust depends on selective verification of clinically critical findings.
- Bounded clinical context and disciplined prompt architecture are essential.
- The proposed approach avoids replacing deterministic clinical logic with black-box models.
- Human supervision layer handles verification, escalation, and risk control.
- Framework published on arXiv (2604.26671).
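The multi-tier escalation and selective-verification pattern described above can be sketched as a simple routing rule: critical findings always go to human review, uncertain cases escalate to a stronger model tier, and routine cases stay with the deterministic core. This is a minimal illustrative sketch; the tier names, thresholds, and the `criticality` and `model_confidence` fields are assumptions, not details from the published framework.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    text: str
    criticality: float       # 0.0 (routine) .. 1.0 (clinically critical); assumed scale
    model_confidence: float  # confidence of the current model tier; assumed scale


def route(finding: Finding) -> str:
    """Route a finding through hypothetical escalation tiers.

    Clinically critical findings are always selectively verified by a human,
    regardless of model confidence; low-confidence routine findings escalate
    to a larger model tier; everything else stays with the deterministic core.
    """
    if finding.criticality >= 0.8:
        return "human_review"        # selective verification of critical findings
    if finding.model_confidence < 0.6:
        return "larger_model"        # multi-tier escalation for uncertain cases
    return "deterministic_core"      # routine cases handled by rule-based logic


# Usage: a critical finding reaches human review even at high model confidence.
print(route(Finding("possible pneumothorax", criticality=0.9, model_confidence=0.95)))
print(route(Finding("mild degenerative change", criticality=0.2, model_confidence=0.4)))
```

The key design choice is that escalation is keyed on clinical criticality first and model confidence second, so a confident model can never bypass the human supervision layer for high-risk findings.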
Entities
Institutions
- arXiv