REFNet++: Efficient Fusion of Camera and Radar Data in Bird's-Eye Polar View
The paper REFNet++ introduces a multi-task framework for fusing camera and radar sensor data in a bird's-eye polar view. It takes the raw range-Doppler spectrum from the radar and front-view camera images as inputs. A variational encoder-decoder learns to transform front-view camera data into the polar Bird's-Eye View (BEV) domain, while a radar encoder-decoder recovers the angle information that the range-Doppler spectrum lacks. The method targets both accuracy and computational efficiency for multimodal sensor fusion in autonomous driving perception.
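In the paper, the front-view-to-BEV mapping is learned by a variational encoder-decoder; the sketch below only illustrates the underlying polar-BEV geometry, resampling a polar grid (range bins x azimuth bins) onto a Cartesian top-down image with nearest-neighbor lookup. The function name, grid sizes, and the front-facing azimuth convention are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def polar_to_cartesian_bev(polar, r_max, out_size):
    """Resample a polar BEV grid onto a Cartesian BEV image.

    polar    : array of shape (n_range_bins, n_azimuth_bins)
    r_max    : maximum range in meters covered by the last range bin
    out_size : side length in pixels of the square Cartesian output

    Assumes a front-facing sensor: azimuth spans [-pi/2, pi/2] around
    the forward (y) axis. This is an illustrative convention, not
    necessarily the one used in the paper.
    """
    n_r, n_a = polar.shape
    xs = np.linspace(-r_max, r_max, out_size)   # lateral axis
    ys = np.linspace(0.0, r_max, out_size)      # forward axis
    X, Y = np.meshgrid(xs, ys)
    r = np.hypot(X, Y)                          # range of each output pixel
    az = np.arctan2(X, Y)                       # azimuth from forward axis
    # Nearest-neighbor source indices into the polar grid.
    ri = np.clip(np.round(r / r_max * (n_r - 1)).astype(int), 0, n_r - 1)
    ai = np.clip(np.round((az + np.pi / 2) / np.pi * (n_a - 1)).astype(int),
                 0, n_a - 1)
    out = polar[ri, ai]
    out[r > r_max] = 0.0                        # blank pixels beyond max range
    return out

# A single occupied polar cell maps to a small arc in the Cartesian BEV.
polar = np.zeros((32, 64))
polar[16, 32] = 1.0
bev = polar_to_cartesian_bev(polar, r_max=50.0, out_size=128)
```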
Key facts
- Paper titled REFNet++ proposes multi-task fusion of camera and radar data
- Uses raw range-Doppler spectrum from radar and front-view camera images
- Employs variational encoder-decoder architecture for BEV polar domain transformation
- Radar encoder-decoder recovers angle information
- Focuses on accuracy and computational efficiency
- Addresses multimodal sensor fusion in autonomous driving
- Published on arXiv with ID 2605.11824
- Radar sensors are robust in adverse weather but produce noisy measurements
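The raw range-Doppler spectrum listed above is the standard output of FMCW radar signal processing: a 2D FFT over the fast-time (per-chirp) and slow-time (across-chirp) axes of the ADC samples. The sketch below, a minimal simulation with one synthetic point target, is illustrative and not part of the paper; the chirp counts and bin placements are arbitrary assumptions.

```python
import numpy as np

def range_doppler_map(adc):
    """Compute a range-Doppler magnitude spectrum from raw FMCW ADC samples.

    adc : complex array of shape (n_chirps, n_samples),
          slow time (chirps) on axis 0, fast time (samples) on axis 1.
    Returns magnitude spectrum of shape (n_chirps, n_samples),
    with the Doppler axis fftshifted so zero velocity sits in the middle.
    """
    range_fft = np.fft.fft(adc, axis=1)                 # range FFT per chirp
    rd = np.fft.fft(range_fft, axis=0)                  # Doppler FFT per range bin
    return np.abs(np.fft.fftshift(rd, axes=0))          # center zero Doppler

# Synthetic single target: beat frequency at range bin 10,
# chirp-to-chirp phase rotation at Doppler bin 5.
n_chirps, n_samples = 64, 128
t = np.arange(n_samples)[None, :]       # fast time
c = np.arange(n_chirps)[:, None]        # slow time
adc = np.exp(2j * np.pi * 10 * t / n_samples) * np.exp(2j * np.pi * 5 * c / n_chirps)
rd = range_doppler_map(adc)
peak = np.unravel_index(np.argmax(rd), rd.shape)
# After fftshift, Doppler bin 5 lands at index 32 + 5 = 37; range bin stays at 10.
```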
Entities
Institutions
- arXiv