2D Spatiotemporal Convolutions Improve EEG Classification Efficiency
A new study on arXiv (2605.03874) investigates two-dimensional (2D) spatiotemporal convolutions in shallow convolutional neural networks (CNNs) for EEG signal classification. Conventional models stack independent one-dimensional (1D) convolutions along the spatial and temporal dimensions with no nonlinear activation between them; the proposed 2D convolution is numerically equivalent to that composition yet exhibits different learning dynamics during training. Tests on low-dimensional (3-channel) and high-dimensional (22-channel) BCI motor imagery tasks show that 2D convolutions significantly reduce training time in the high-dimensional setting while maintaining classification performance. The study also evaluates a CNN+transformer hybrid model. The authors investigate the root causes of this efficiency gain, offering a representation learning perspective toward more efficient and explainable EEG classification.
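The numerical equivalence the paper relies on is a standard separability property: a 1D temporal convolution followed by a 1D spatial convolution, with no nonlinearity in between, composes into a single 2D convolution whose kernel is the outer product of the two 1D kernels. A minimal NumPy sketch (an illustration of this property, not the paper's code; `conv2d_valid` is a helper defined here):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 50))   # toy EEG segment: channels x time samples
kt = rng.normal(size=5)        # 1D temporal kernel
ks = rng.normal(size=3)        # 1D spatial (cross-channel) kernel

# Path A: two stacked 1D convolutions with no nonlinearity in between.
# Temporal conv applied per channel, then spatial conv applied per time step.
step1 = np.apply_along_axis(lambda row: np.convolve(row, kt, mode="valid"), 1, x)
seq = np.apply_along_axis(lambda col: np.convolve(col, ks, mode="valid"), 0, step1)

# Path B: one 2D convolution with the rank-1 kernel outer(ks, kt).
def conv2d_valid(img, kernel):
    """Valid-mode 2D convolution (kernel flipped, matching np.convolve)."""
    kf = kernel[::-1, ::-1]
    h, w = kernel.shape
    H, W = img.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * kf)
    return out

joint = conv2d_valid(x, np.outer(ks, kt))

print(np.allclose(seq, joint))  # the two paths produce identical outputs
```

The forward passes agree exactly, so any efficiency or learning-dynamics difference between the two parameterizations must come from how gradients flow through the factored versus joint kernels, which is the question the paper examines.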
Key facts
- arXiv paper 2605.03874
- Compares 1D and 2D convolutions in CNNs for EEG
- Tests on 3-channel and 22-channel BCI motor imagery tasks
- 2D convolutions reduce training time in high-dimensional tasks
- Performance is maintained with 2D convolutions
- Includes CNN+transformer hybrid model
- The 1D baseline stacks convolutions with no nonlinear activation between them, which makes it numerically equivalent to the 2D model
- Study focuses on representation learning for efficiency and explainability
Entities
Institutions
- arXiv