ARTFEED — Contemporary Art Intelligence

Task-Conditioned Latent Alignment for Cross-Session Neural Decoding

ai-technology · 2026-04-27

Researchers have introduced Task-Conditioned Latent Alignment (TCLA), a framework for cross-session neural decoding in settings where target-session data are scarce. TCLA trains an autoencoder on a data-rich source session to learn low-dimensional neural representations, then aligns the target session's latent representations to the source latent space in a task-conditioned manner. On macaque motor and oculomotor center-out datasets, TCLA consistently outperformed baselines trained only on target-session data, improving decoding performance across datasets and conditions.
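The article does not detail how the task-conditioned alignment is computed. As a rough illustration only, the sketch below shows one simple way such an alignment could work: for each task condition (e.g. each center-out reach direction), fit an orthogonal Procrustes map that rotates target-session latents into the source session's latent space. The function names and the choice of Procrustes are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def fit_task_conditioned_alignment(z_src, z_tgt, tasks):
    """Fit one orthogonal (Procrustes) map per task condition that
    aligns target-session latents to source-session latents.

    z_src, z_tgt: (n_trials, d) paired latent representations
    tasks: (n_trials,) integer task/condition labels
    Returns {task: (R, mu_tgt, mu_src)} per-condition transforms.
    """
    maps = {}
    for t in np.unique(tasks):
        A = z_tgt[tasks == t]          # target latents, this condition
        B = z_src[tasks == t]          # source latents, this condition
        mu_a, mu_b = A.mean(0), B.mean(0)
        # Orthogonal Procrustes: R = argmin_R ||(A - mu_a) R - (B - mu_b)||_F
        U, _, Vt = np.linalg.svd((A - mu_a).T @ (B - mu_b))
        maps[t] = (U @ Vt, mu_a, mu_b)
    return maps

def align(z, tasks, maps):
    """Map target-session latents into the source latent space."""
    out = np.empty_like(z)
    for t, (R, mu_a, mu_b) in maps.items():
        m = tasks == t
        out[m] = (z[m] - mu_a) @ R + mu_b
    return out
```

Conditioning on the task label is what distinguishes this from a single global alignment: each condition gets its own map, which can absorb condition-specific geometry changes between sessions. A decoder trained on source-session latents could then be applied to the aligned target latents directly.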

Key facts

  • TCLA is a framework for cross-session neural decoding with limited target-session data.
  • It uses an autoencoder to learn low-dimensional neural representations from a source session.
  • Target latent representations are aligned to the source session in a task-conditioned manner.
  • Evaluated on macaque motor and oculomotor center-out datasets.
  • TCLA consistently improves decoding performance compared to baselines trained only on target-session data.
  • The paper is available on arXiv with ID 2601.19963.
  • The arXiv announcement type is replace-cross (a revised, cross-listed submission).
  • The framework addresses the challenge of limited data in new (target) recording sessions.

Entities

Institutions

  • arXiv

Sources