Neural Bridge Processes Improve Input Conditioning in Diffusion Models
Neural Bridge Processes (NBPs) are a machine learning approach that replaces the input-independent forward kernel of Neural Diffusion Processes (NDPs) with a bridge trajectory anchored to the input. This change makes the noisy training states themselves encode the conditioning inputs, improving both expressivity and uncertainty-aware learning from partially observed context-target pairs. When the input and output dimensions differ, NBPs learn an output-space anchor a_ψ(x)=P_ψ(x) to direct the generative trajectory while leaving the denoising backbone unchanged. Theoretical analysis shows that process-level anchoring induces pathwise input distinguishability, injects information about x into the noisy states, and creates a direct gradient pathway. The paper is available on arXiv under ID 2508.07220.
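To make the idea concrete, here is a minimal NumPy sketch of an input-anchored bridge forward kernel. It assumes a Brownian-bridge form (linear mean interpolation, variance t(1-t)σ²); the learned anchor network P_ψ is stood in for by a hypothetical linear projection `anchor`, since the paper's actual parameterization is not specified here.

```python
import numpy as np

def anchor(x, W):
    """Hypothetical stand-in for the learned anchor a_psi(x) = P_psi(x):
    a linear projection mapping inputs into the output space."""
    return x @ W

def bridge_forward(y0, a, t, sigma=1.0, rng=None):
    """Sample a noisy state y_t from a Brownian bridge that starts at the
    clean target y0 (t=0) and is pinned to the anchor a (t=1).

    The mean interpolates linearly between y0 and a, so the noisy state
    carries information about the conditioning input x through a.
    The variance t*(1-t)*sigma^2 vanishes at both endpoints."""
    rng = np.random.default_rng() if rng is None else rng
    mean = (1.0 - t) * y0 + t * a
    std = sigma * np.sqrt(t * (1.0 - t))
    return mean + std * rng.standard_normal(y0.shape)

# Usage: inputs x in R^3, targets y in R^2 (mismatched dimensions),
# illustrating why an output-space anchor is needed.
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 3))
y0 = rng.standard_normal((5, 2))
W = rng.standard_normal((3, 2))   # hypothetical anchor parameters psi
a = anchor(x, W)
yt = bridge_forward(y0, a, t=0.5, rng=rng)
```

Because the bridge is pinned to a_ψ(x), two different inputs with the same target produce distinguishable noisy trajectories, which is the pathwise input distinguishability the paper claims; an input-independent kernel, by contrast, would yield identical distributions over y_t.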
Key facts
- Neural Bridge Processes (NBPs) replace the unconditional forward kernel with an input-anchored bridge trajectory.
- NBPs improve upon Neural Diffusion Processes (NDPs) by making noisy states encode conditioning inputs.
- When input and output dimensions differ, NBPs learn an output-space anchor a_ψ(x)=P_ψ(x).
- Process-level anchoring induces pathwise input distinguishability.
- The method injects information about x into noisy states.
- It creates a direct gradient pathway unavailable in previous approaches.
- The paper is arXiv:2508.07220v3.
- The announcement type is replace-cross.