ARTFEED — Contemporary Art Intelligence

UbiQVision Framework Quantifies Uncertainty in XAI for Medical Imaging

ai-technology · 2026-04-25

A new framework called UbiQVision addresses instability in SHAP explanations for deep learning models used in medical imaging. The method combines Dirichlet posterior sampling with Dempster-Shafer theory to quantify epistemic and aleatoric uncertainty, producing belief, plausibility, and fusion maps that improve the reliability of model interpretability. The research targets increasingly complex architectures such as ResNets, Vision Transformers, and hybrid CNNs, whose complexity often compromises explainability. The study is published on arXiv under ID 2512.20288.
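The paper's exact pipeline is not reproduced in this summary. As a rough, hypothetical illustration of how belief, plausibility, and fusion maps can be derived from sampled SHAP attributions via Dempster-Shafer theory, here is a minimal NumPy sketch; the sample data, the mass-assignment scheme, and all names are assumptions, not UbiQVision's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the framework's input: S posterior samples of a
# SHAP attribution map (H x W), e.g. as drawn under Dirichlet posterior sampling.
S, H, W = 20, 8, 8
samples = rng.normal(loc=0.3, scale=0.1, size=(S, H, W))

# Rescale attributions to [0, 1] as per-pixel evidence of "relevant".
ev = (samples - samples.min()) / (np.ptp(samples) + 1e-12)

def masses(e):
    """Per-pixel Dempster-Shafer masses over the frame {relevant, irrelevant}.

    The spread across posterior samples (epistemic uncertainty) is routed
    into the mass on the whole frame Theta ("don't know").
    """
    mu, sd = e.mean(axis=0), e.std(axis=0)
    m_theta = np.clip(sd / (sd.max() + 1e-12), 0.0, 1.0) * 0.5
    m_rel = mu * (1.0 - m_theta)
    m_irr = (1.0 - mu) * (1.0 - m_theta)
    return m_rel, m_irr, m_theta

def dempster(m1, m2):
    """Combine two mass maps with Dempster's rule of combination."""
    r1, i1, t1 = m1
    r2, i2, t2 = m2
    k = r1 * i2 + i1 * r2                  # conflicting mass
    norm = 1.0 - k + 1e-12
    r = (r1 * r2 + r1 * t2 + t1 * r2) / norm
    i = (i1 * i2 + i1 * t2 + t1 * i2) / norm
    t = (t1 * t2) / norm
    return r, i, t

m_rel, m_irr, m_theta = masses(ev)
belief = m_rel                   # Bel({relevant}) = m({relevant})
plausibility = m_rel + m_theta   # Pl({relevant}) = m({relevant}) + m(Theta)

# A toy "fusion map": fuse evidence from two halves of the posterior samples.
fused = dempster(masses(ev[: S // 2]), masses(ev[S // 2 :]))
fusion_map = fused[0]            # combined mass on {relevant}
```

By construction, belief never exceeds plausibility per pixel, and the gap between the two maps grows with the epistemic spread across samples, which is the intuition behind using such maps as a reliability signal for explanations.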

Key facts

  • UbiQVision uses Dirichlet posterior sampling and Dempster-Shafer theory
  • Focuses on quantifying uncertainty in SHAP explanations
  • Addresses epistemic and aleatoric uncertainty in medical imaging
  • Produces belief, plausibility, and fusion maps
  • Targets ResNets, Vision Transformers, and Hybrid CNNs
  • Published on arXiv with ID 2512.20288

Entities

Institutions

  • arXiv
