Explainability Framework Analyzes Generative Diffusion Models for MRI Synthesis
A new study on arXiv investigates the explainability of generative diffusion models for medical imaging, specifically MRI synthesis. The research proposes a faithfulness-based explainability framework that analyzes how prototype-based methods such as ProtoPNet (PPNet), Enhanced ProtoPNet (EPPNet), and ProtoPool link generated features to training features. By tracing image formation along the denoising trajectory of the diffusion model and combining it with prototype explanations and faithfulness analysis, the framework probes how the model's internal decisions shape the synthesized image. In the experiments, EPPNet achieves the highest faithfulness score of 0.1534, offering the most reliable insight into the generative process. The work targets the opacity of diffusion models' internal decision-making, with the aim of improving trust in AI-generated medical images.
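To make the idea concrete, the sketch below shows one common way a prototype-based faithfulness check can be set up: score a generated image's features against learned prototypes, then measure how much the similarity drops when the features the explanation points to are perturbed. This is a minimal illustration of the general technique, not the paper's actual metric; the function names and the cosine-similarity choice are assumptions.

```python
import numpy as np

def prototype_similarity(features, prototypes):
    """Cosine similarity between one feature vector and each learned prototype.

    features:   (d,) feature vector extracted from a generated image
    prototypes: (k, d) matrix of k learned prototype vectors
    Returns a (k,) vector of similarities in [-1, 1].
    """
    f = features / np.linalg.norm(features)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return p @ f

def faithfulness_score(features, prototypes, perturbed_features):
    """Hypothetical faithfulness proxy: mean drop in prototype similarity
    after perturbing the feature dimensions the explanation highlights.
    A larger drop means the explanation pointed at features the model
    actually relied on, i.e. the explanation is more faithful."""
    base = prototype_similarity(features, prototypes)
    pert = prototype_similarity(perturbed_features, prototypes)
    return float(np.mean(base - pert))

# Toy usage with random stand-ins for real image features and prototypes.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(5, 8))   # 5 prototypes in an 8-dim feature space
features = rng.normal(size=8)          # features of one generated MRI slice
perturbed = features + rng.normal(scale=0.5, size=8)  # explanation-guided perturbation
score = faithfulness_score(features, prototypes, perturbed)
```

Under this framing, scores like the 0.1534 reported for EPPNet can be read as an average similarity drop: small in absolute terms, but comparable across explainability methods.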
Key facts
- Study investigates explainability of generative diffusion models for MRI synthesis.
- Proposes faithfulness-based explainability framework.
- Analyzes prototype-based methods: ProtoPNet (PPNet), Enhanced ProtoPNet (EPPNet), and ProtoPool.
- Focuses on denoising trajectory of diffusion models.
- EPPNet achieves highest faithfulness score of 0.1534.
- Aims to address the opacity of diffusion models' internal decision-making.
- Published on arXiv with ID 2602.09781v2.
- Research is in the field of medical imaging and AI explainability.