AutoAWG Framework Advances Adverse Weather Video Generation for Autonomous Driving
AutoAWG is a new framework for generating controllable adverse-weather videos tailored to autonomous driving. It targets a persistent problem: real-world video data captured in challenging weather is scarce, yet such data is vital for making perception systems reliable. The framework uses semantics-guided adaptive fusion to merge multiple control signals, balancing strong weather effects against the faithful rendering of safety-critical elements such as vehicles and pedestrians. A vanishing point-anchored temporal synthesis strategy constructs training sequences from static images, reducing dependence on synthetic data, and masked training stabilizes long-horizon generation. On the nuScenes validation set, AutoAWG surpasses prior state-of-the-art methods on FID and FVD without relying on first-frame conditioning, addressing a common limitation of existing weather generation approaches.
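The article does not give the details of the semantics-guided adaptive fusion, but the stated goal (strong weather effects everywhere except safety-critical regions, which should stay legible) suggests a per-pixel blend of control features weighted by a semantic mask. The sketch below is a hypothetical, dependency-light illustration of that idea; the function name, shapes, and the `alpha` parameter are assumptions, not AutoAWG's actual interface.

```python
import numpy as np

def adaptive_fuse(weather_feat: np.ndarray,
                  layout_feat: np.ndarray,
                  safety_mask: np.ndarray,
                  alpha: float = 0.8) -> np.ndarray:
    """Hypothetical semantics-guided fusion of two control feature maps.

    weather_feat, layout_feat: (H, W, C) control features.
    safety_mask: (H, W) binary mask, 1 where safety-critical objects
    (vehicles, pedestrians) must remain clearly represented.

    Inside the mask, the layout/semantic signal dominates so critical
    objects are not washed out by rain or fog; elsewhere the weather
    control dominates to keep the effect strong.
    """
    # Per-pixel weight on the weather branch: low in critical regions.
    w = np.where(safety_mask.astype(bool), 1.0 - alpha, alpha)
    return w[..., None] * weather_feat + (1.0 - w)[..., None] * layout_feat
```

A learned variant would predict the per-pixel weight from the semantic map instead of thresholding a fixed `alpha`, but the blending structure is the same.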
Key facts
- AutoAWG is a controllable adverse weather video generation framework for autonomous driving
- Addresses scarcity of real-world video data in adverse weather conditions
- Uses semantics-guided adaptive fusion of multiple controls
- Employs vanishing point-anchored temporal synthesis strategy
- Constructs training sequences from static images
- Reduces reliance on synthetic data
- Uses masked training for long-horizon generation stability
- Outperforms prior methods on the nuScenes validation set (FID and FVD) without first-frame conditioning
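The vanishing point-anchored temporal synthesis named above can be pictured as turning one static image into a short pseudo-video by cropping progressively toward the vanishing point, which mimics forward ego-motion. The sketch below is an assumed minimal version of that idea (function name, zoom factor, and nearest-neighbor resize are all illustrative choices, not AutoAWG's published procedure).

```python
import numpy as np

def vp_anchored_sequence(image: np.ndarray, vp: tuple,
                         num_frames: int = 4,
                         zoom_per_frame: float = 0.9) -> list:
    """Build a pseudo-video from one static image.

    Each frame crops a slightly smaller window whose position is pulled
    toward the vanishing point (vx, vy), then resizes it back to the
    original resolution, approximating the look of driving forward.
    Nearest-neighbor resize keeps the sketch dependency-free.
    """
    h, w = image.shape[:2]
    vx, vy = vp
    frames = []
    for t in range(num_frames):
        scale = zoom_per_frame ** t  # crop shrinks each frame
        ch, cw = max(1, int(h * scale)), max(1, int(w * scale))
        # Anchor the crop so its interior converges on the vanishing point.
        x0 = int(np.clip(vx - cw * (vx / w), 0, w - cw))
        y0 = int(np.clip(vy - ch * (vy / h), 0, h - ch))
        crop = image[y0:y0 + ch, x0:x0 + cw]
        # Nearest-neighbor upsample back to (h, w).
        ys = (np.arange(h) * ch / h).astype(int)
        xs = (np.arange(w) * cw / w).astype(int)
        frames.append(crop[np.ix_(ys, xs)])
    return frames
```

Sequences built this way supply temporally plausible training clips without rendering synthetic scenes, which matches the article's claim of reduced reliance on synthetic data.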