Sphere-Depth Benchmark Tests Depth Estimation on Spherical Camera Pose Variations
A new public benchmark called Sphere-Depth systematically evaluates monocular depth estimation models on equirectangular images under varying camera orientations. The benchmark simulates camera pose perturbations to test the robustness of the perspective-based model Depth Anything and of the spherical-aware models Depth Anywhere, ACDNet, BiFuse++, and SliceNet. A depth-calibration-based error protocol is proposed so that errors can be compared meaningfully across models that predict depth at different scales. The work addresses challenges in 360° vision for robotic navigation and immersive scene understanding, where unintentional pose variations and equirectangular distortions degrade depth estimation reliability.
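Camera pose perturbations on a spherical image can be simulated without re-capturing data: rotating the camera amounts to rotating the viewing direction of every equirectangular pixel and resampling. A minimal sketch of that idea, assuming yaw/pitch rotations and nearest-neighbor resampling (the function name and conventions are illustrative, not taken from the paper):

```python
import numpy as np

def rotate_equirect(img, yaw=0.0, pitch=0.0):
    """Resample an equirectangular image (H, W[, C]) as if the camera
    were rotated by yaw/pitch (radians). Nearest-neighbor lookup."""
    H, W = img.shape[:2]
    # Pixel grid -> spherical angles (longitude lon, latitude lat).
    j, i = np.meshgrid(np.arange(W), np.arange(H))
    lon = (j + 0.5) / W * 2 * np.pi - np.pi
    lat = np.pi / 2 - (i + 0.5) / H * np.pi
    # Angles -> unit viewing directions.
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    # Rotation matrices: yaw about the vertical axis, pitch about x.
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    d = dirs @ (Rx @ Ry).T
    # Rotated directions -> source pixel coordinates.
    lon2 = np.arctan2(d[..., 0], d[..., 2])
    lat2 = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))
    js = ((lon2 + np.pi) / (2 * np.pi) * W).astype(int) % W
    is_ = ((np.pi / 2 - lat2) / np.pi * H).astype(int).clip(0, H - 1)
    return img[is_, js]
```

A pure yaw of pi radians, for example, is equivalent to rolling the image half a width horizontally, which gives a quick sanity check for such a resampler.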
Key facts
- Sphere-Depth is a novel public benchmark for depth estimation from spherical images.
- It evaluates robustness of monocular depth estimation models under simulated camera pose perturbations.
- Models tested include Depth Anything, Depth Anywhere, ACDNet, BiFuse++, and SliceNet.
- A depth calibration-based error protocol is proposed for fair evaluation across models.
- The benchmark addresses geometric distortions in equirectangular projections and real-world pose variations.
- Application domains include robotic navigation and immersive scene understanding.
- The work is published on arXiv with ID 2604.23432.
- The study focuses on 360° vision challenges.
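The calibration-based protocol is likely motivated by the standard practice of aligning each prediction to ground truth before scoring, since the compared models output depth at different (sometimes only relative) scales. A hedged sketch using least-squares scale-and-shift alignment followed by absolute relative error; the specifics are an assumption, not the paper's exact protocol:

```python
import numpy as np

def calibrated_abs_rel(pred, gt, mask=None):
    """Align pred to gt with a least-squares scale s and shift t
    (minimizing ||s*pred + t - gt||^2 over valid pixels), then
    return the absolute relative error. This alignment is a common
    convention for relative-depth models, assumed here for illustration."""
    pred, gt = np.ravel(pred), np.ravel(gt)
    if mask is None:
        mask = gt > 0                      # ignore invalid ground truth
    else:
        mask = np.ravel(mask) & (gt > 0)
    p, g = pred[mask], gt[mask]
    # Closed-form least squares for [s, t].
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    aligned = s * p + t
    return float(np.mean(np.abs(aligned - g) / g))
```

With this calibration, a prediction that is an affine transform of the ground truth scores a near-zero error, so the metric isolates structural depth mistakes from global scale ambiguity.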