SensingAgents: Multi-Agent LLM Framework for IMU Activity Recognition
SensingAgents is a multi-agent system that applies Large Language Models (LLMs) to Human Activity Recognition (HAR) from Inertial Measurement Unit (IMU) sensors. Existing deep learning HAR models struggle with reliance on labeled data, position-dependent ambiguity, and opaque reasoning. SensingAgents assigns distinct roles to LLM-driven agents: Analyst Agents analyze position-specific sensor data (arm, wrist, belt, pocket), Advocate Agents resolve cross-sensor discrepancies through dynamic and static debates, and a Decision Agent maintains reliability under sensor drift or failure. Evaluated on the Shoaib dataset, the framework demonstrated notable improvements and addresses key challenges in IMU-based HAR, with potential applications in mobile health, smart environments, and human-computer interaction.
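The role split maps naturally onto a three-stage pipeline. The sketch below is a minimal illustration of that structure, not the paper's implementation: `llm` is a hypothetical callable returning an (activity, rationale) pair, and the prompts, debate protocol, and voting rule are all assumptions.

```python
from dataclasses import dataclass
from collections import Counter

POSITIONS = ["arm", "wrist", "belt", "pocket"]

@dataclass
class Hypothesis:
    position: str   # sensor placement the reading came from
    activity: str   # predicted activity label
    rationale: str  # natural-language justification from the LLM

def analyst(position, imu_window, llm):
    """Analyst Agent: interpret one position's IMU window (assumed prompt)."""
    activity, rationale = llm(
        f"IMU data from the {position} sensor:\n{imu_window}\n"
        "Name the most likely activity and justify it."
    )
    return Hypothesis(position, activity, rationale)

def advocate_debate(hypotheses, llm, rounds=2):
    """Advocate Agents: cross-examine conflicting hypotheses for a few rounds."""
    for _ in range(rounds):
        if len({h.activity for h in hypotheses}) == 1:
            break  # consensus reached; no further debate needed
        transcript = "\n".join(
            f"{h.position}: {h.activity} ({h.rationale})" for h in hypotheses
        )
        hypotheses = [
            Hypothesis(h.position, *llm(
                f"Other sensors report:\n{transcript}\n"
                f"Defend or revise your {h.position} reading."
            ))
            for h in hypotheses
        ]
    return hypotheses

def decide(hypotheses):
    """Decision Agent: majority vote over the surviving hypotheses."""
    votes = Counter(h.activity for h in hypotheses)
    return votes.most_common(1)[0][0] if votes else "unknown"
```

A full pass would look like `decide(advocate_debate([analyst(p, w, llm) for p, w in windows.items()], llm))`, where `windows` maps each sensor position to its IMU segment.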
Key facts
- SensingAgents is a multi-agent system for IMU activity recognition
- It uses LLM-powered agents in specialized roles
- Analyst Agents handle position-specific sensor analysis
- Advocate Agents resolve sensor conflicts through debates
- Decision Agent ensures reliability under sensor drift or failure (see the health-check sketch after this list)
- Evaluated on the Shoaib dataset
- Addresses issues of labeled data dependency and position ambiguity
- Potential applications in mobile health and smart environments
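How the Decision Agent withstands drift or failure is not detailed in this summary. One plausible ingredient, sketched below under stated assumptions, is a raw-signal health check that excludes flat-lined or saturated streams before they can cast a vote; the thresholds (`var_floor`, `sat_limit`) and the check itself are illustrative, not taken from the paper.

```python
import numpy as np

def sensor_is_healthy(accel, var_floor=1e-4, sat_limit=78.0):
    """Heuristic pre-check: reject streams that look dead or saturated.

    accel: (N, 3) accelerometer window in m/s^2. Both thresholds are
    assumed values chosen for illustration only.
    """
    if accel.var(axis=0).max() < var_floor:  # frozen/disconnected sensor
        return False
    if np.abs(accel).max() > sat_limit:      # clipped beyond roughly 8 g
        return False
    return True

# Synthetic demo: a live wrist stream passes, a flat-lined pocket stream fails.
rng = np.random.default_rng(0)
streams = {
    "wrist": rng.normal(0.0, 2.0, size=(128, 3)),
    "pocket": np.zeros((128, 3)),
}
print([p for p, a in streams.items() if sensor_is_healthy(a)])  # ['wrist']
```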