Critical slowing down in diffusion models analyzed via O(n) model
A recent preprint posted on arXiv (2605.12597) analyzes diffusion models, a class of generative AI, by applying them to the O(n) model of statistical field theory in the large-n limit, where the theory becomes Gaussian and exactly solvable. The authors find that when a one-layer network is trained as a score model, even though it can match the exact score, parameter learning suffers from critical slowing down, which in turn degrades the generation process. In other words, the difficulty of sampling near critical points persists even for learned generative models. The study shows that combining architecture-aware approaches can overcome this bottleneck, providing theoretical insight into why diffusion models succeed or fail and enabling better control over their behavior.
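To make the mechanism concrete, here is a minimal, self-contained sketch, not code from the paper: at n → ∞ the O(n) model reduces to a Gaussian field, so the exact score is linear and a one-layer (linear, per-mode) score model can represent it. The toy below assumes an Ornstein-Uhlenbeck forward process at a single noise level and full-batch gradient descent; the function name, lattice size, and parameter values are all illustrative. It demonstrates one way such slowing down can arise: the number of training steps needed to recover the exact score grows like 1/m² as the squared mass m² → 0, i.e., as the data distribution approaches criticality.

```python
import numpy as np

# Toy sketch (not from the paper): at n -> infinity the O(n) model is a
# Gaussian field, so the exact score is linear in x and a one-layer linear
# score model can represent it mode by mode. We assume an Ornstein-Uhlenbeck
# forward process x_t = a*x_0 + sigma*eps at one fixed time t, and count
# full-batch gradient-descent steps needed to recover the exact per-mode
# score weights w* = 1 / lambda_t.

def gd_steps_to_fit_score(m2, L=64, t=0.5, tol=1e-3):
    """GD steps to fit the exact score; m2 -> 0 approaches criticality."""
    k = 2 * np.pi * np.fft.fftfreq(L)             # lattice momenta in [-pi, pi)
    lam0 = 1.0 / (4 * np.sin(k / 2) ** 2 + m2)    # data covariance eigenvalues
    a2, s2 = np.exp(-2 * t), 1 - np.exp(-2 * t)   # OU signal/noise variances
    lam_t = a2 * lam0 + s2                        # covariance after noising

    # Per-mode denoising-score-matching loss is quadratic:
    #   L(w) = lam_t * w^2 - 2*w + const,  minimized at w* = 1 / lam_t.
    w = np.zeros_like(lam_t)
    lr = 1.0 / (2 * lam_t.max())                  # largest stable step size
    steps = 0
    while np.max(np.abs(lam_t * w - 1.0)) > tol:  # relative error vs. w*
        w -= lr * (2 * lam_t * w - 2.0)           # exact full-batch gradient
        steps += 1
    return steps

for m2 in [1.0, 0.1, 0.01, 0.001]:
    print(f"m^2 = {m2:6.3f} -> {gd_steps_to_fit_score(m2):6d} GD steps")
```

The slowdown in this toy comes from the condition number of the quadratic score-matching loss, λ_max/λ_min ∝ 1/m², which diverges at the critical point; whether this matches the paper's precise mechanism is an assumption of the sketch.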
Key facts
- The study is published on arXiv with ID 2605.12597.
- It analyzes diffusion models using the O(n) model in the Gaussian limit n → ∞.
- Training a one-layer score network that can match the exact score still exhibits critical slowing down.
- Critical slowing down affects both parameter learning and generation (see the scaling note after this list).
- Sampling near criticality remains difficult for learned generative models.
- Combining architecture-aware approaches can overcome the bottleneck.
- The work offers theoretical control over diffusion model behavior.
- Computational sampling has been central to the sciences since the mid-20th century.
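As a textbook aside (standard dynamic scaling, not a result quoted from the paper): near criticality the correlation length ξ diverges, and relaxation times grow as a power of it. For a Gaussian theory, which is what the O(n) model becomes as n → ∞, the free-field exponents give

```latex
% Standard critical-slowing-down scaling for a Gaussian (free-field) theory;
% m is the mass, which vanishes at the critical point.
\xi \sim m^{-1} \sim |T - T_c|^{-\nu}, \quad \nu = \tfrac{1}{2},
\qquad
\tau \sim \xi^{z} \sim m^{-2}, \quad z = 2.
```

The 1/m² growth of gradient-descent steps in the sketch above is the learning-side analogue of this diverging relaxation time τ; connecting the two this way is the sketch's framing, not the paper's derivation.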
Entities
Institutions
- arXiv