ARTFEED — Contemporary Art Intelligence

Atomic Fact-Checking Boosts Clinician Trust in AI Oncology Recommendations

other · 2026-05-07

A study posted to arXiv reports that atomic fact-checking markedly increases the trust clinicians place in recommendations from large language models (LLMs) for oncology decision-making. In the randomized trial, 356 clinicians provided 7,476 trust ratings. Decomposing each AI treatment suggestion into separately verifiable claims, each linked to its source guideline document, produced a large gain in trust (Cohen's d = 0.94), raising the share of trusting clinicians from 26.9% to 66.5%. Traditional transparency methods, by contrast, yielded only modest improvements over baseline (d = 0.25 to 0.50). The authors conclude that disaggregating AI recommendations into guideline-linked verifiable claims earns substantially more clinician trust than conventional explainability methods in high-stakes clinical settings.
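The decomposition idea described above can be sketched in a few lines. This is a hypothetical illustration of the general technique, not the study's actual pipeline: the claim texts, guideline references, and verification flags below are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class AtomicClaim:
    text: str           # one separately verifiable statement from the recommendation
    guideline_ref: str  # pointer into the source guideline document
    verified: bool      # whether the claim matched the cited guideline passage

def verify_recommendation(claims):
    """Return the fraction of atomic claims that verified, plus the failures."""
    failures = [c for c in claims if not c.verified]
    score = 1 - len(failures) / len(claims)
    return score, failures

# Illustrative claims only -- not taken from the study.
claims = [
    AtomicClaim("Drug X is first-line for stage II disease.", "Guideline 3.1", True),
    AtomicClaim("Recommended dose is 50 mg twice daily.", "Guideline 3.2", True),
    AtomicClaim("No interaction with anticoagulants.", "Guideline 5.4", False),
]

score, failures = verify_recommendation(claims)
print(f"{score:.0%} of claims verified; {len(failures)} flagged for review")
# prints "67% of claims verified; 1 flagged for review"
```

The point of the design is that a clinician can inspect each flagged claim against its cited guideline passage, rather than trusting or rejecting the recommendation as a whole.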

Key facts

  • Randomized controlled trial on atomic fact-checking for LLM recommendations in oncology
  • 356 clinicians participated, generating 7,476 trust ratings
  • Atomic fact-checking increased trust from 26.9% to 66.5%
  • Cohen's d = 0.94 for atomic fact-checking
  • Traditional transparency mechanisms showed d = 0.25 to 0.50
  • Claims linked to source guideline documents
  • Study published on arXiv under Computer Science > Computation and Language
  • Focus on high-stakes clinical decisions
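For readers unfamiliar with the effect sizes quoted above, Cohen's d is the difference between two group means divided by their pooled standard deviation; d = 0.94 means the groups differ by roughly one pooled standard deviation. A minimal sketch of the standard formula (the toy data is illustrative, not from the study):

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Toy example: means differ by exactly one pooled standard deviation.
print(cohens_d([5, 6, 7], [4, 5, 6]))
# prints "1.0"
```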

Entities

Institutions

  • arXiv

Sources