ATLAS: New Tool for Robotic Action Segmentation Annotation
ATLAS is an annotation tool for segmenting long-horizon robotic actions. It addresses the limitations of existing tools, which typically handle only visual data or lack support for robot-specific time-series signals. With ATLAS, users can visualize multiple data types in sync, such as multi-view video and proprioceptive signals, and annotate action boundaries, action labels, and task outcomes. It natively supports common robotics data formats, including ROS bags and RLDS, which reduces the effort of adapting to different dataset layouts. Because precise action timing is essential for both training and evaluating action segmentation and manipulation policy learning methods, a tool of this kind fills a real gap.
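The core of time-synchronized visualization is aligning signals recorded at different rates onto a common timeline. The source does not describe ATLAS's internals; the sketch below shows one standard approach, nearest-timestamp matching of high-rate proprioceptive samples to video frame timestamps, with illustrative variable names throughout.

```python
import numpy as np

def align_to_frames(frame_ts, signal_ts, signal_vals):
    """For each video frame timestamp, pick the proprioceptive sample
    with the nearest timestamp (nearest-neighbor synchronization).
    Assumes signal_ts is sorted ascending."""
    idx = np.searchsorted(signal_ts, frame_ts)
    idx = np.clip(idx, 1, len(signal_ts) - 1)
    left = signal_ts[idx - 1]
    right = signal_ts[idx]
    # Step back one index wherever the left neighbor is closer.
    idx -= frame_ts - left < right - frame_ts
    return signal_vals[idx]

# Toy data: 30 Hz video frames vs. 100 Hz joint readings.
frame_ts = np.arange(0.0, 1.0, 1 / 30)
signal_ts = np.arange(0.0, 1.0, 1 / 100)
signal_vals = signal_ts * 2.0  # stand-in for joint positions
aligned = align_to_frames(frame_ts, signal_ts, signal_vals)
```

Nearest-neighbor matching keeps every frame paired with a real measurement; interpolation is the usual alternative when signals are smooth.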
Key facts
- ATLAS is an annotation tool for long-horizon robotic action segmentation.
- It supports time-synchronized visualization of multi-modal robotic data.
- Data types include multi-view video and proprioceptive signals.
- It annotates action boundaries, action labels, and task outcomes.
- It natively handles ROS bags and RLDS dataset formats.
- Existing tools are limited to vision-only data or lack robot-specific signal support.
- The tool is designed to reduce adaptation effort for different dataset formats.
- It is crucial for training and evaluating action segmentation and policy learning methods.
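The annotations listed above (action boundaries, action labels, task outcomes) naturally form a small per-episode record. This is not ATLAS's actual output schema, which the source does not specify; it is a minimal sketch of what such a record might look like, with all field names illustrative.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ActionSegment:
    label: str      # action label, e.g. "reach", "grasp"
    start_s: float  # segment start, seconds from episode start
    end_s: float    # segment end (exclusive)

@dataclass
class EpisodeAnnotation:
    episode_id: str
    task_outcome: str                 # e.g. "success" or "failure"
    segments: list = field(default_factory=list)

# Build an annotation for one episode and serialize it to JSON.
ann = EpisodeAnnotation(episode_id="ep_0001", task_outcome="success")
ann.segments.append(ActionSegment("reach", 0.0, 1.4))
ann.segments.append(ActionSegment("grasp", 1.4, 2.1))
print(json.dumps(asdict(ann), indent=2))
```

A flat JSON record like this is easy to join back against ROS bag or RLDS episodes by `episode_id` and timestamp range.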