RL Framework for GNSS Interference Localization from RF Data
This work proposes a reinforcement-learning framework for GNSS interference localization, cast as an active sensing problem. An agent equipped with a 2x2 patch antenna sequentially explores the environment, collecting radio-frequency measurements to infer the emitter's position. Because single-snapshot measurements are ambiguous under multipath propagation, the task is modeled as a partially observable decision process. The framework combines RF sensing with deep reinforcement learning and recurrent policy learning, and evaluates both Deep Q-Networks (DQN) and Proximal Policy Optimization (PPO).
Key facts
- GNSS interference poses a threat to reliable positioning
- Localization is challenging in indoor and multipath-rich environments
- Formulated as an active sensing problem
- Uses reinforcement learning framework
- Agent sequentially explores environment
- Observations from 2x2 patch antenna
- Task modeled as partially observable decision process
- Combines deep RL with recurrent policy learning
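The active-sensing loop summarized above can be sketched in toy form. The grid world, inverse-distance path-loss model, noise level, and the greedy signal-climbing baseline below are all illustrative assumptions, not the paper's method; the actual framework trains DQN/PPO agents with recurrent policies over antenna-array observations.

```python
import math
import random


class InterferenceGrid:
    """Toy 2-D environment: a stationary interference emitter and an
    agent that moves one cell per step, observing noisy received
    signal strength (RSS). Geometry and noise values are assumed."""

    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # N, S, E, W

    def __init__(self, size=10, emitter=(7, 7), seed=0):
        self.size, self.emitter = size, emitter
        self.rng = random.Random(seed)
        self.pos = (0, 0)

    def observe(self):
        # RSS falls off with log-distance; Gaussian noise stands in for
        # the measurement ambiguity caused by multipath propagation.
        d = math.dist(self.pos, self.emitter)
        return -20.0 * math.log10(d + 1.0) + self.rng.gauss(0.0, 0.5)

    def step(self, action):
        dx, dy = self.ACTIONS[action]
        x = min(max(self.pos[0] + dx, 0), self.size - 1)
        y = min(max(self.pos[1] + dy, 0), self.size - 1)
        self.pos = (x, y)
        return self.observe()


def greedy_probe(env, steps=40):
    """Baseline active-sensing policy: probe each move and keep the one
    with the highest observed RSS (a simple stand-in for a learned
    recurrent DQN/PPO policy)."""
    for _ in range(steps):
        start = env.pos
        best_a, best_rss = 0, -float("inf")
        for a in range(len(env.ACTIONS)):
            env.pos = start
            rss = env.step(a)
            if rss > best_rss:
                best_a, best_rss = a, rss
        env.pos = start
        env.step(best_a)
    return env.pos
```

Even this noise-aware greedy baseline converges near the emitter on the toy grid; the point of the learned recurrent policy is to handle cases where single measurements are too ambiguous for such one-step greedy decisions.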