ARTFEED — Contemporary Art Intelligence

Dirichlet-Approximated Possibilistic Posterior Predictions for Deep Learning Uncertainty

other · 2026-05-04

A new framework called Dirichlet-approximated possibilistic posterior predictions (DAPPr) addresses a central dilemma in epistemic uncertainty modelling for deep neural networks: Bayesian approaches offer principled estimates but are computationally prohibitive, while efficient second-order predictors lack rigorous derivations. DAPPr leverages possibility theory to define a possibilistic posterior over parameters, projects it to prediction space via supremum operators, and approximates the result with learnable Dirichlet possibility functions. This yields a simple training objective with closed-form solutions, enabling reliable uncertainty quantification without heavy computation. The method is introduced in arXiv:2605.00600.
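
The supremum projection mentioned above has a simple finite analogue: if a possibility distribution over parameters is represented by a sampled ensemble, the possibility of a prediction is the highest possibility among the parameters that produce it. The sketch below illustrates this idea under stated assumptions; the ensemble, the max-normalised possibility degrees, and the argmax prediction rule are illustrative stand-ins, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a small ensemble standing in for the possibilistic
# posterior over parameters. Each member carries a possibility degree.
n_members, n_classes = 50, 3
logits = rng.normal(size=(n_members, n_classes))
pi_theta = rng.uniform(size=n_members)
pi_theta /= pi_theta.max()  # possibility distributions are sup-normalised to 1

preds = logits.argmax(axis=1)  # each member's predicted class

# Supremum projection to prediction space: the possibility of class y is
# the supremum (here, max) of pi_theta over parameters predicting y.
pi_y = np.zeros(n_classes)
for y in range(n_classes):
    mask = preds == y
    pi_y[y] = pi_theta[mask].max() if mask.any() else 0.0

print(pi_y)
```

A class with possibility near 1 is fully plausible under the posterior, while a low value signals that no plausible parameter setting predicts it; the gap between the two largest values is one simple epistemic-uncertainty signal.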

Key facts

  • DAPPr stands for Dirichlet-approximated possibilistic posterior predictions.
  • It is a framework for epistemic uncertainty modelling in deep neural networks.
  • Bayesian approaches are principled but computationally prohibitive.
  • Efficient second-order predictors lack rigorous derivations.
  • DAPPr uses possibility theory to define a possibilistic posterior over parameters.
  • The posterior is projected to prediction space via supremum operators.
  • The projected posterior is approximated using learnable Dirichlet possibility functions.
  • The training objective has closed-form solutions.
  • The method is introduced in arXiv:2605.00600.
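
The Dirichlet approximation step can also be made concrete. One natural way to turn a Dirichlet density into a possibility function is to rescale it by its value at the mode, so its supremum over the simplex equals 1. The following sketch uses that construction as an assumption; it is not claimed to be the paper's exact parameterisation.

```python
import numpy as np

def dirichlet_possibility(p, alpha):
    """Possibility degree of a categorical distribution p under a Dirichlet
    possibility function with concentration alpha.

    Hypothetical construction: the Dirichlet density rescaled by its value
    at the mode, so the supremum over the simplex is 1. Requires all
    alpha_k > 1 so that the mode is interior.
    """
    alpha = np.asarray(alpha, dtype=float)
    p = np.asarray(p, dtype=float)
    mode = (alpha - 1.0) / (alpha.sum() - len(alpha))  # Dirichlet mode
    # pi(p) = prod_k (p_k / mode_k)^(alpha_k - 1), evaluated in log space
    log_pi = np.sum((alpha - 1.0) * (np.log(p) - np.log(mode)))
    return np.exp(log_pi)

alpha = np.array([5.0, 3.0, 2.0])
mode = (alpha - 1.0) / (alpha.sum() - 3)
print(dirichlet_possibility(mode, alpha))             # 1.0 at the mode
print(dirichlet_possibility([1/3, 1/3, 1/3], alpha))  # below 1 elsewhere
```

Because the concentration `alpha` can be produced by a network head and the normalising constant cancels in the mode-rescaled form, objectives built on such functions can admit closed-form solutions, consistent with the training objective described above.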

Entities

Institutions

  • arXiv

Sources