ARTFEED — Contemporary Art Intelligence

Study Challenges Attack Success Rate as Sole Metric for Medical Imaging AI Security

publication · 2026-04-22

A new research paper questions whether Attack Success Rate (ASR) is adequate for evaluating adversarial vulnerabilities in medical imaging AI. Published on arXiv under identifier 2604.16532v1, the study argues that ASR alone paints an incomplete picture because it ignores crucial factors such as perturbation strength and image quality. Security assessment is further complicated by the rise of Vision Transformers (ViTs), whose learning behavior differs from that of traditional Convolutional Neural Networks (CNNs); with such disparate architectures in use, it is unclear whether any single metric can capture adversarial behavior effectively. As deep learning becomes more prevalent in medical image analysis, these gaps raise serious concerns for clinical deployment. To address them, the authors propose a multi-metric evaluation framework and conduct a systematic empirical study of adversarial transferability across model architectures, emphasizing that vulnerability assessments must account for cross-architecture attack transferability if clinical AI systems are to be robust.
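The multi-metric idea can be sketched in a few lines: report ASR alongside a perturbation-strength measure and an image-quality measure instead of ASR alone. Everything below is an illustrative assumption, not the paper's actual setup: a toy linear "model", an FGSM-style perturbation, the L∞ norm for strength, and PSNR for quality.

```python
import numpy as np

def psnr(clean, adv, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between a clean and an adversarial image."""
    mse = np.mean((clean - adv) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def evaluate_attack(model_predict, clean, adv, labels):
    """Report ASR together with perturbation strength and image quality."""
    asr = float(np.mean(model_predict(adv) != labels))   # attack success rate
    linf = float(np.max(np.abs(adv - clean)))            # worst-case perturbation
    quality = float(np.mean([psnr(c, a) for c, a in zip(clean, adv)]))
    return {"asr": asr, "linf": linf, "psnr_db": quality}

# Toy linear "model" over flattened 8x8 images in [0, 1] (illustrative only).
rng = np.random.default_rng(0)
w = rng.normal(size=64)
def predict(x):
    return (x @ w > 0).astype(int)

clean = rng.uniform(0, 1, size=(32, 64))
labels = predict(clean)              # treat the clean predictions as ground truth
eps = 0.03
# FGSM-style step: push each image against its predicted class.
adv = np.clip(clean - eps * np.sign(w) * np.where(labels[:, None] == 1, 1, -1), 0, 1)
metrics = evaluate_attack(predict, clean, adv, labels)
```

Two attacks with identical ASR can then still be distinguished: one may need a far larger `linf` budget or degrade `psnr_db` much more, which is exactly the information a single ASR number discards.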

Key facts

  • Research paper published on arXiv with identifier 2604.16532v1
  • Challenges Attack Success Rate (ASR) as sole metric for AI security
  • Focuses on adversarial vulnerabilities in medical imaging models
  • Highlights concerns for clinical deployment of deep learning systems
  • Notes rise of Vision Transformers (ViTs) challenging Convolutional Neural Networks (CNNs)
  • Proposes multi-metric evaluation including perturbation strength and image quality
  • Examines cross-architecture attack transferability
  • Conducts systematic empirical study on adversarial behavior
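Cross-architecture transferability, the last two points above, can also be sketched: craft adversarial examples against one architecture (a surrogate) and check how often they also fool a structurally different target. The two stand-in "architectures" here (a linear model and a fixed random MLP) and the FGSM-style step are hypothetical simplifications, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64

# Surrogate: linear model. Target: small fixed random ReLU MLP.
w_surr = rng.normal(size=d)
W1, W2 = rng.normal(size=(d, 16)), rng.normal(size=16)

def surrogate(x):
    return (x @ w_surr > 0).astype(int)

def target(x):
    return (np.maximum(x @ W1, 0) @ W2 > 0).astype(int)

clean = rng.uniform(0, 1, size=(200, d))
labels = surrogate(clean)

# FGSM-style step computed against the surrogate only.
eps = 0.05
adv = np.clip(clean - eps * np.sign(w_surr) * np.where(labels[:, None] == 1, 1, -1), 0, 1)

fools_surr = surrogate(adv) != labels
fools_target = target(adv) != target(clean)

asr_surrogate = float(np.mean(fools_surr))
# Transfer rate: of the examples that fool the surrogate, how many also
# flip the target's prediction despite its different architecture?
transfer_rate = float(np.mean(fools_target[fools_surr])) if fools_surr.any() else 0.0
```

A high `asr_surrogate` with a low `transfer_rate` (or vice versa) is the kind of cross-architecture behavior a single-model ASR evaluation cannot reveal.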

Entities

Institutions

  • arXiv