ARTFEED — Contemporary Art Intelligence

Scale-Gest: Adaptive Gesture Detection Framework for Mobile Devices

ai-technology · 2026-05-14

Scale-Gest is a run-time adaptive gesture detection framework for mobile devices operating under varying battery levels. It builds a dense family of tiny-YOLO architectures and derives device-specific ACE (Accuracy-Complexity-Energy) profiles by profiling different model-resolution-stride configurations. A lightweight run-time controller selects the best ACE mode under user-defined and battery constraints, while a motion-aware ROI gate tracks hand gestures and crops the input to reduce detection complexity. The framework is evaluated in real-world car-driving scenarios on a temporally annotated dataset called Dri. The work addresses on-device gesture detection under tight energy, memory, and real-time constraints, opening optimization avenues beyond current EdgeAI solutions that rely on a static detector.
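The paper's controller internals are not described here, but the idea of picking an ACE mode under a battery constraint can be sketched. In the minimal sketch below, every profile name, field, and number is a hypothetical placeholder (the real profiles come from profiling model-resolution-stride settings on the target device): the controller takes the most accurate profile whose per-frame energy fits a budget that shrinks with the remaining battery fraction.

```python
from dataclasses import dataclass

@dataclass
class ACEProfile:
    # Hypothetical ACE profile fields; the paper's actual schema is not public here.
    name: str
    accuracy: float    # e.g. validation mAP, higher is better
    complexity: float  # GFLOPs per frame
    energy: float      # millijoules per frame (measured on-device)

def select_ace_mode(profiles, battery_frac, min_accuracy, full_budget_mj=5.0):
    """Pick the most accurate profile whose per-frame energy fits a budget
    that scales linearly with the remaining battery fraction (an assumed
    policy, not the paper's)."""
    budget = full_budget_mj * battery_frac
    feasible = [p for p in profiles
                if p.energy <= budget and p.accuracy >= min_accuracy]
    if not feasible:
        # Fall back to the cheapest profile rather than stop detecting.
        return min(profiles, key=lambda p: p.energy)
    return max(feasible, key=lambda p: p.accuracy)

# Illustrative profile family (made-up numbers).
profiles = [
    ACEProfile("tiny-yolo-160x160-s2", 0.62, 0.9, 1.2),
    ACEProfile("tiny-yolo-320x320-s2", 0.71, 3.4, 2.8),
    ACEProfile("tiny-yolo-416x416-s1", 0.78, 6.1, 4.6),
]

print(select_ace_mode(profiles, battery_frac=0.5, min_accuracy=0.6).name)
# → tiny-yolo-160x160-s2
```

At full battery the same call admits the largest model, so the detector trades accuracy against energy continuously as the battery drains.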

Key facts

  • Scale-Gest is a run-time adaptive gesture detection framework.
  • It uses a dense family of tiny-YOLO architectures.
  • ACE profiles stand for Accuracy-Complexity-Energy.
  • A lightweight run-time controller selects the ACE mode subject to user-defined and battery constraints.
  • A motion-aware hand-gesture-tracking ROI gate crops input for reduced complexity.
  • Performance is evaluated in real-world car driving scenarios.
  • A temporally-annotated dataset called Dri is introduced for evaluation.
  • The framework targets mobile devices with varying battery-power levels.
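How the motion-aware ROI gate crops the input is not specified in this summary; a common approach, sketched below under that assumption, is frame differencing: find pixels that changed beyond a threshold, bound them in a box, pad it, and hand only that crop to the detector. All names and thresholds here are illustrative, not the paper's.

```python
def motion_roi(prev, curr, thresh=25, pad=2):
    """Return a (top, left, bottom, right) box around pixels whose absolute
    frame-to-frame difference exceeds `thresh`; None if nothing moved.
    Frames are 2-D lists of grayscale intensities (toy stand-in for real
    camera frames)."""
    rows, cols = len(curr), len(curr[0])
    ys, xs = [], []
    for y in range(rows):
        for x in range(cols):
            if abs(curr[y][x] - prev[y][x]) > thresh:
                ys.append(y)
                xs.append(x)
    if not ys:
        return None
    # Pad the motion box, clamped to the frame borders.
    return (max(min(ys) - pad, 0),
            max(min(xs) - pad, 0),
            min(max(ys) + pad, rows - 1),
            min(max(xs) + pad, cols - 1))

def crop(frame, roi):
    """Extract the ROI sub-frame that would be passed to the detector."""
    t, l, b, r = roi
    return [row[l:r + 1] for row in frame[t:b + 1]]
```

Detection cost then scales with the crop area rather than the full frame, which is where the complexity reduction comes from.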
