Denoising Particle Filters for Efficient Robot State Estimation
Researchers propose a particle filtering algorithm for robot state estimation that avoids expensive end-to-end sequence training. Instead, models are trained from individual state transitions, exploiting the Markov property of robotic systems. The measurement model is learned implicitly via denoising score matching; at inference, the learned denoiser is combined with a dynamics model to approximately solve the Bayesian filtering equations at each time step, guiding predicted states toward the data manifold as informed by the measurements. The method is evaluated on challenging robotic tasks and offers a more interpretable and efficient alternative to sequence-based end-to-end learning.
Key facts
- Proposed algorithm trains models from individual state transitions.
- Exploits Markov property in robotic systems.
- Measurement model learned via denoising score matching.
- Inference uses learned denoiser and dynamics model.
- Approximately solves Bayesian filtering equation at each time step.
- Guides predicted states toward data manifold from measurements.
- Evaluated on challenging robotic tasks.
- Offers interpretable and efficient alternative to end-to-end training.
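The filtering loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `dynamics` and `denoiser_score` are hypothetical stand-ins for the learned transition model and the score-matching denoiser (here replaced by a simple drift model and the analytic score of a Gaussian likelihood on a 1-D state), and the step size and noise scales are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics(particles, noise_std=0.05):
    # Stand-in for a learned dynamics model trained on individual state
    # transitions: constant drift plus process noise (placeholder only).
    return particles + 0.1 + noise_std * rng.standard_normal(particles.shape)

def denoiser_score(particles, measurement, sigma=0.2):
    # Stand-in for a denoiser trained via denoising score matching: its output
    # approximates the gradient of the log measurement likelihood. Here we
    # substitute the analytic score of a Gaussian centred on the measurement.
    return (measurement - particles) / sigma**2

def filter_step(particles, measurement, step_size=0.02, sigma=0.2):
    # 1. Predict: propagate particles through the dynamics model.
    predicted = dynamics(particles)
    # 2. Guide: nudge predicted states toward the data manifold using the
    #    learned score (a gradient step, in the spirit of Langevin updates).
    guided = predicted + step_size * denoiser_score(predicted, measurement, sigma)
    # 3. Weight: importance weights from the (approximate) likelihood.
    log_w = -0.5 * ((measurement - guided) / sigma) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # 4. Resample to avoid particle degeneracy.
    idx = rng.choice(len(guided), size=len(guided), p=w)
    return guided[idx]

particles = rng.standard_normal(500)  # initial belief over a 1-D state
for measurement in [0.2, 0.35, 0.5]:
    particles = filter_step(particles, measurement)

print(float(particles.mean()))  # posterior mean tracks the measurements
```

Because each `filter_step` uses only the current particles and the current measurement, the models never need to be trained on full trajectories, which is the efficiency argument made above.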