ARTFEED — Contemporary Art Intelligence

Unsupervised diffusion autoencoder restores artifacts in handheld fundus images

ai-technology · 2026-04-20

A novel diffusion autoencoder model tackles the challenge of removing artifacts from handheld fundus images without paired supervision. The model is trained solely on high-quality table-top fundus images and is then applied to restore degraded handheld acquisitions. Handheld devices improve accessibility, but flash reflections, exposure inconsistencies, and motion blur frequently compromise image quality and can obstruct downstream analysis. Although generative models have proven useful for image restoration, many depend on paired supervision or assumptions about artifact structure, which limits their flexibility with unstructured degradations. The new method integrates a context encoder with the denoising process to learn semantically meaningful representations, supporting more efficient and cost-effective ophthalmologic diagnosis and disease screening via handheld devices. The study is available as arXiv:2604.15723v1.
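To make the diffusion mechanics concrete, the sketch below shows the standard forward-noising and inversion equations that underlie diffusion-based restoration. This is a minimal illustrative toy, not the paper's method: the linear noise schedule, timestep count, and the "oracle" noise estimate (standing in for the trained denoiser plus context encoder) are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear noise schedule (assumption: the paper's actual schedule is not given)
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def q_sample(x0, t, eps):
    """Forward diffusion: corrupt a clean image x0 to noise level t."""
    ab = alpha_bars[t]
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps

def predict_x0(xt, t, eps_hat):
    """Invert the forward process given a (predicted) noise estimate."""
    ab = alpha_bars[t]
    return (xt - np.sqrt(1.0 - ab) * eps_hat) / np.sqrt(ab)

# Toy stand-in for a clean table-top fundus patch: a smooth gradient
x0 = np.linspace(0.0, 1.0, 64).reshape(8, 8)

# Corrupt to an intermediate timestep, loosely mimicking degradation
t = 50
eps = rng.standard_normal(x0.shape)
xt = q_sample(x0, t, eps)

# With a perfect noise prediction (oracle in place of the learned
# denoiser conditioned on the context encoder), x0 is recovered exactly.
x0_hat = predict_x0(xt, t, eps)
print(np.allclose(x0_hat, x0))  # True
```

In the unsupervised setting described above, the denoiser would be trained only on clean table-top images; at inference, a degraded handheld image is encoded and denoised toward the clean-image distribution.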

Key facts

  • The model is an unsupervised diffusion autoencoder
  • It integrates a context encoder with denoising
  • Training uses only high-quality table-top fundus images
  • It restores artifact-affected handheld acquisitions
  • Artifacts include flash reflections and motion-induced blur
  • Handheld fundus imaging devices improve accessibility
  • Many generative models depend on paired supervision
  • The research is published as arXiv:2604.15723v1
