AEMG: Self-Supervised Learning Framework for Generalizable EMG Representations
A team of researchers has introduced Any Electromyography (AEMG), the first large-scale, self-supervised framework for learning representations of EMG signals. The approach reinterprets neuromuscular dynamics linguistically through a Neuromuscular Contraction Tokenizer (NCT), which translates muscle contractions into structural words and temporal activation patterns into coherent sentences. AEMG compiles the largest vocabulary of cross-device EMG signals, enabling seamless transfer across varying channel topologies and sampling rates. Experiments show that AEMG improves zero-shot leave-one-out generalization across subjects, devices, and tasks, addressing both data heterogeneity and label scarcity. The paper is available on arXiv under identifier 2605.03462.
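The summary does not describe the NCT's internals, but the "contractions as words" idea can be illustrated with a generic vector-quantization tokenizer: window the raw signal, summarize each window with simple features, and assign each window to its nearest entry in a learned codebook of "words". Everything below (the feature choice, the codebook size, the function name `tokenize_emg`) is a hypothetical sketch, not the paper's method:

```python
import numpy as np

def tokenize_emg(signal, codebook, window=64, hop=32):
    """Map one EMG channel to a sequence of discrete 'word' ids.

    Each window of raw samples is summarized by a small feature
    vector (RMS amplitude and zero-crossing rate here, purely
    illustrative) and assigned to its nearest codebook entry.
    The resulting token sequence is the 'sentence' for the recording.
    """
    tokens = []
    for start in range(0, len(signal) - window + 1, hop):
        w = signal[start:start + window]
        rms = np.sqrt(np.mean(w ** 2))                      # contraction intensity
        zcr = np.mean(np.abs(np.diff(np.sign(w)))) / 2.0    # rough frequency proxy
        feat = np.array([rms, zcr])
        # nearest-neighbour lookup in the (assumed pre-learned) codebook
        tokens.append(int(np.argmin(np.linalg.norm(codebook - feat, axis=1))))
    return tokens

rng = np.random.default_rng(0)
codebook = rng.random((16, 2))        # 16 hypothetical 'words'
emg = rng.standard_normal(512)        # stand-in for one EMG channel
sentence = tokenize_emg(emg, codebook)
print(len(sentence))                  # one token per hop-spaced window
```

In a real self-supervised setup the codebook would be learned jointly with the encoder (e.g. via vector quantization) rather than sampled at random as it is here.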
Key facts
- AEMG is the first large-scale, self-supervised representation learning framework for EMG.
- It uses a Neuromuscular Contraction Tokenizer (NCT) to translate muscle contractions into structural words.
- Temporal activation patterns are converted into coherent sentences.
- The framework compiles the largest cross-device EMG signal vocabulary.
- It enables seamless transfer across arbitrary channel topologies and sampling rates.
- Experiments demonstrate improved zero-shot leave-one-out generalization.
- The research addresses data heterogeneity, label scarcity, and the lack of a unified representational framework.
- Published on arXiv with ID 2605.03462.
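The summary does not say how AEMG handles arbitrary channel topologies and sampling rates. One common recipe for making an EMG pipeline device-agnostic is to resample every channel to a shared canonical rate and process channels independently, so any channel count or layout maps to a list of uniform 1-D streams. The sketch below assumes that recipe; the canonical rate and function names are illustrative, not from the paper:

```python
import numpy as np

def to_canonical(signal, fs_in, fs_out=1000):
    """Linearly resample one channel to a shared canonical rate so that
    recordings from devices with different sampling rates land in the
    same temporal grid (1000 Hz chosen arbitrarily here)."""
    duration = len(signal) / fs_in
    n_out = int(round(duration * fs_out))
    t_in = np.arange(len(signal)) / fs_in
    t_out = np.arange(n_out) / fs_out
    return np.interp(t_out, t_in, signal)

def normalize_recording(channels, fs_in, fs_out=1000):
    """Treat each channel independently, so arbitrary channel
    topologies (any count or layout) become a list of canonical
    1-D streams rather than a fixed-shape array."""
    return [to_canonical(np.asarray(ch, dtype=float), fs_in, fs_out)
            for ch in channels]

rng = np.random.default_rng(1)
rec_200hz = [rng.standard_normal(400) for _ in range(8)]  # 8 channels, 200 Hz, 2 s
canon = normalize_recording(rec_200hz, fs_in=200)
print(len(canon), len(canon[0]))  # 8 channels, each resampled to 2000 samples
```

A learned model would typically add positional or electrode-placement embeddings on top of such streams; this sketch covers only the rate/topology normalization step.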