ARTFEED — Contemporary Art Intelligence

Multi-Task Learning Outperforms State-of-the-Art in Medical Anomaly Detection

other · 2026-05-09

Researchers propose MTL-MAD, a multi-task learning framework for anomaly detection in medical images. Unlike recent methods that rely on a single pretext task and large pre-trained models, MTL-MAD learns multiple self-supervised and pseudo-labeling tasks from scratch using a Mixture-of-Experts (MoE) joint model. By integrating diverse proxy tasks, the model captures robust representations of normal anatomy, so anomalies can be scored at inference time by how well the model performs those tasks. Experiments on the BMAD benchmark, which covers multiple medical imaging modalities, show that MTL-MAD surpasses all state-of-the-art competitors. The results demonstrate that multi-task learners can be highly effective for medical anomaly detection without requiring pre-training.
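The core idea — score an image by how poorly a jointly trained set of pretext-task experts handles it — can be illustrated with a toy sketch. Everything below (the linear encoder, the reconstruction-style mock tasks, the gating network, and all names and shapes) is an illustrative assumption, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: a shared encoder, one expert head per pretext
# task, and an MoE gating network. Shapes and names are hypothetical.
D_IN, D_FEAT, N_TASKS = 64, 16, 3

W_enc = rng.normal(size=(D_IN, D_FEAT)) * 0.1             # shared encoder
W_heads = rng.normal(size=(N_TASKS, D_FEAT, D_IN)) * 0.1  # task expert heads
W_gate = rng.normal(size=(D_FEAT, N_TASKS)) * 0.1         # MoE gating weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def task_losses(x):
    """Per-task losses for one image vector x. Each pretext task is
    mocked as 'reconstruct the input from the shared feature'."""
    feat = np.tanh(x @ W_enc)
    return np.array([np.mean((feat @ W_heads[t] - x) ** 2)
                     for t in range(N_TASKS)])

def anomaly_score(x):
    """Gate-weighted sum of task losses: experts trained only on normal
    anatomy should do worse on anomalous inputs, raising the score."""
    feat = np.tanh(x @ W_enc)
    weights = softmax(feat @ W_gate)   # MoE gate picks how to mix tasks
    return float(weights @ task_losses(x))

# Score a sample; higher values indicate likelier anomalies.
score = anomaly_score(rng.normal(size=D_IN))
```

In a real system the heads would be trained self-supervised losses (e.g. rotation prediction or inpainting) over normal images, but the scoring recipe — aggregate per-task performance through the gate — is the same shape as sketched here.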

Key facts

  • MTL-MAD uses multiple self-supervised and pseudo-labeling tasks.
  • The model is trained from scratch with a Mixture-of-Experts (MoE) joint model.
  • Anomaly scores are derived from multi-task learner performance during inference.
  • Comprehensive experiments conducted on the BMAD benchmark.
  • BMAD spans a broad range of medical imaging modalities.
  • MTL-MAD outperforms all state-of-the-art competitors on BMAD.
  • The method does not rely on large-scale pre-trained models.
  • The approach learns robust representations of normal anatomical structures.
