LIR Algorithm Unifies EM, Belief Propagation, GANs, and GFlowNets
A new generic algorithm, Local Inconsistency Resolution (LIR), has been introduced for learning and approximate inference in probabilistic models. Built on Probabilistic Dependency Graphs (PDGs), which can capture inconsistent beliefs, LIR resolves those inconsistencies iteratively by focusing attention on subsets of the model. The algorithm unifies and generalizes several important methods, including Expectation-Maximization (EM), belief propagation, adversarial training (including GANs), and GFlowNets; each is recovered as a specific instance of LIR by a particular choice of attention and control procedures. Notably, LIR suggests a more natural loss for GFlowNets that improves convergence. The algorithm is implemented for discrete PDGs, and the paper is available on arXiv (2604.17140).
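The iterate-focus-resolve structure described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the names (`lir`, `attend`, `resolve`) and the averaging update are assumptions chosen to show the shape of the loop, here applied to two mutually inconsistent scalar "beliefs" about the same quantity.

```python
# Hypothetical sketch of the generic LIR loop: repeatedly pick a local
# piece of the model (attention) and adjust it to reduce inconsistency
# (control). All names and the toy update rule are illustrative only.

def lir(beliefs, attend, resolve, steps=200):
    for _ in range(steps):
        focus = attend(beliefs)            # attention: choose a local subset
        beliefs = resolve(beliefs, focus)  # control: shrink its inconsistency
    return beliefs

def attend(b):
    # Toy attention: always focus on the one inconsistent pair.
    return ("p", "q")

def resolve(b, focus):
    # Toy control: move both beliefs halfway toward their midpoint.
    i, j = focus
    mid = (b[i] + b[j]) / 2
    return {**b, i: b[i] + 0.5 * (mid - b[i]), j: b[j] + 0.5 * (mid - b[j])}

result = lir({"p": 0.0, "q": 1.0}, attend, resolve)
print(abs(result["p"] - result["q"]) < 1e-3)  # beliefs converge to agreement
```

Each step halves the gap between the two beliefs, so the local inconsistency decays geometrically; richer instances would vary what `attend` selects and what objective `resolve` minimizes.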
Key facts
- LIR is a generic algorithm for learning and approximate inference.
- It is built upon Probabilistic Dependency Graphs (PDGs).
- LIR unifies EM, belief propagation, adversarial training, GANs, and GFlowNets.
- LIR suggests a more natural loss for GFlowNets, improving convergence.
- Each method is recovered by choosing attention and control procedures.
- The algorithm is implemented for discrete PDGs.
- The paper is available on arXiv with ID 2604.17140.
- LIR has an intuitive epistemic interpretation: iteratively focus and resolve inconsistencies.
Entities
Institutions
- arXiv