ARTFEED — Contemporary Art Intelligence

Risk-Aware Learning for Label Noise in Medical Imaging

other · 2026-04-29

A recent investigation published on arXiv (2604.23875) examines how well noise-robust training techniques preserve clinical safety under label noise in medical image classification. The study analyzes Co-teaching, DivideMix, UNICON, and a GMM-based sample-filtering method on the binarized DermaMNIST and PathMNIST datasets under clean conditions and at 20% and 40% label noise. It notes that false negatives (missed diseases) carry greater clinical risk than false positives, yet most evaluations report only accuracy; the work therefore aims to assess clinical risk rather than accuracy alone.
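One of the evaluated strategies, GMM-based filtering, fits a two-component Gaussian mixture to per-sample training losses and treats the low-loss component as the likely-clean set. A minimal sketch with scikit-learn, using synthetic losses rather than the paper's actual training setup:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical per-sample cross-entropy losses after a warm-up epoch;
# correctly labeled samples tend to sit in the low-loss mode.
rng = np.random.default_rng(0)
losses = np.concatenate([
    rng.normal(0.3, 0.1, 800),   # mostly clean samples (low loss)
    rng.normal(2.0, 0.5, 200),   # mostly mislabeled samples (high loss)
]).reshape(-1, 1)

# Fit a two-component GMM to the loss distribution.
gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)

# The component with the smaller mean models the "clean" samples.
clean_comp = int(np.argmin(gmm.means_.ravel()))
p_clean = gmm.predict_proba(losses)[:, clean_comp]

# Keep samples whose posterior probability of being clean exceeds 0.5.
keep_mask = p_clean > 0.5
print(f"kept {keep_mask.sum()} of {len(losses)} samples")
```

In practice the retained subset is used for supervised training (or, as in DivideMix, the discarded subset is treated as unlabeled); the threshold of 0.5 here is an illustrative choice.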

Key facts

  • Study evaluates noise-robust methods under label noise for medical image classification
  • Methods tested: Co-teaching, DivideMix, UNICON, GMM-based filtering
  • Datasets: binarized DermaMNIST and PathMNIST
  • Noise rates: clean, 20%, 40%
  • Focus on clinical risk, not just accuracy
  • False negatives have higher consequences than false positives
  • Published on arXiv with ID 2604.23875
  • Annotation errors arise from inter-observer variability and diagnostic ambiguity
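The asymmetry noted above, where false negatives outweigh false positives, can be captured with a cost-weighted error rather than plain accuracy. A hypothetical illustration (the 5:1 cost ratio and the counts are assumptions for demonstration, not values from the paper):

```python
def clinical_risk(tp, fp, fn, tn, fn_cost=5.0, fp_cost=1.0):
    """Average per-sample cost, penalizing missed diseases (FN) more than
    false alarms (FP). fn_cost=5.0 is an illustrative assumption."""
    total = tp + fp + fn + tn
    return (fn_cost * fn + fp_cost * fp) / total

# Two classifiers with identical 90% accuracy but different error profiles.
risk_a = clinical_risk(tp=45, fp=8, fn=2, tn=45)   # few missed diseases
risk_b = clinical_risk(tp=39, fp=2, fn=8, tn=51)   # many missed diseases
print(risk_a, risk_b)  # risk_b is higher despite equal accuracy
```

Under this metric the second classifier looks markedly worse, which is exactly the kind of gap an accuracy-only evaluation hides.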

Entities

Institutions

  • arXiv

Sources