ARTFEED — Contemporary Art Intelligence

EEG Foundation Models Interpreted via Layer-Wise Relevance Propagation

ai-technology · 2026-05-13

A recent investigation published on arXiv (2605.11885) employs attention-aware layer-wise relevance propagation (LRP) to interpret EEG foundation models (EEG-FMs), i.e., foundation models trained on electroencephalography (EEG) data. The study adapts LRP, originally developed for convolutional neural networks (CNNs), to the Transformer architectures that underlie contemporary EEG-FMs. In motor imagery tasks, LRP uncovered 'Clever Hans' behavior: models relied on task-correlated ocular signals rather than the intended motor signals. In affect prediction, LRP revealed a consistent dependence on a central cluster of electrodes, hinting at a possible sensorimotor signature of arousal. These results illustrate LRP's capacity both to validate model decisions and to surface new, biologically plausible hypotheses, helping to address a key obstacle to the broader adoption of EEG-FMs in diagnostics and brain-computer interfaces (BCIs).
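For intuition, here is a minimal sketch of LRP's core idea on a single linear layer, using the standard epsilon rule in NumPy. This is an illustration of how relevance redistribution works in general, not the paper's attention-aware implementation; the function name, shapes, and epsilon value are assumptions for the example.

    import numpy as np

    def lrp_epsilon_linear(a, W, b, R_out, eps=1e-6):
        # Epsilon-rule LRP for a linear layer z = a @ W + b.
        # a: (n_in,) input activations; W: (n_in, n_out); b: (n_out,)
        # R_out: (n_out,) relevance assigned to the layer's outputs.
        z = a @ W + b                       # forward pre-activations
        s = R_out / (z + eps * np.sign(z))  # stabilized output ratios
        return a * (W @ s)                  # relevance per input unit

    # Toy usage: relevance is (approximately) conserved across the layer.
    rng = np.random.default_rng(0)
    a, W, b = rng.normal(size=4), rng.normal(size=(4, 3)), rng.normal(size=3)
    R_out = np.abs(a @ W + b)               # e.g., relevance = output magnitude
    R_in = lrp_epsilon_linear(a, W, b, R_out)
    print(R_in, R_in.sum(), R_out.sum())

Applied layer by layer from a model's prediction back to its input, this rule yields a relevance score per input feature, which for EEG can be mapped back to electrodes and time points.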

Key facts

  • arXiv paper 2605.11885 investigates LRP for EEG foundation models.
  • LRP is extended from CNN-based EEG models to Transformer architectures.
  • Motor imagery analysis uncovered 'Clever Hans' behavior: models relied on task-correlated ocular signals.
  • Affect prediction revealed reliance on a central electrode cluster.
  • LRP can verify EEG-FM decisions and generate biologically plausible hypotheses.
  • EEG-FMs promise to scale deep learning for EEG despite data scarcity.
  • The opaque nature of FMs is a barrier to adoption in diagnostics and BCIs.
  • The study uses attention-aware LRP, a post-hoc attribution method (see the sketch after this list).
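To make "attention-aware" concrete, the sketch below shows one common way relevance can be propagated through an attention layer Y = A @ V, by treating the attention matrix A as fixed weights. This is a simplification used by several Transformer-LRP variants, not necessarily the paper's exact rule; all names and shapes here are assumptions.

    import numpy as np

    def lrp_attention(A, V, R_out, eps=1e-6):
        # Propagate relevance through Y = A @ V, treating the
        # attention matrix A as constant weights (a common
        # simplification in Transformer LRP variants).
        # A: (T, T) attention weights; V: (T, d) value vectors;
        # R_out: (T, d) relevance of the attention output.
        Y = A @ V                           # (T, d) forward output
        S = R_out / (Y + eps * np.sign(Y))  # stabilized ratios
        return V * (A.T @ S)                # relevance of each value vector

    # Toy usage with softmax-like attention rows that sum to 1:
    rng = np.random.default_rng(1)
    A = rng.dirichlet(np.ones(5), size=5)
    V = rng.normal(size=(5, 8))
    R_out = np.abs(A @ V)
    R_V = lrp_attention(A, V, R_out)
    print(R_V.sum(axis=1))                  # per-token relevance scores

Summing the returned relevance over the feature axis gives a per-token score; for an EEG-FM, such scores can in principle be traced back to channels or time patches, which is what makes findings like the ocular 'Clever Hans' shortcut visible.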

Entities

Institutions

  • arXiv

Sources